
The Scientific Literature

Nonsense Lives On in the Citations

It’s apparent to anyone who’s familiar with the scientific literature that citations to other papers are not exactly an ideal system. Citation counts have long been one of the currencies of publication, since highly cited work clearly stands out as having been useful to others and as more visible in the scientific community (the great majority of papers do actually get cited eventually by someone, by the way). But anything that can be measured will be managed, and “managed” includes the darker meanings “gamed” and “manipulated”. The classic method is to cite your own work to hell and gone, but readers will have heard of reviewers who demand that their own work be cited, of citation rings where everyone gets together to boost each other’s numbers, of citations for sale, of publishers packing their own journals with internal references, and more schemes besides.

Now, outside of this sort of chicanery, you see many other problems: (1) people citing things because other people have cited them, not because they’ve actually looked over said reference themselves, (2) people just flat missing things, relevant papers (or patents!) that really would shore up their own arguments but don’t even get a look, and (3) people citing things that don’t necessarily do the job that they seem to think they do.

In that last category I put a special irritating feature of the synthetic organic chemistry literature, one that every bench chemist sees coming before I reach the end of this sentence. I refer to the nesting-doll method of referencing the preparation of some compound: instead of telling everyone how you made it, you just say that it was prepared by the method of Arglebargle, reference 15. So you go look up the Arglebargle paper and find that they don’t tell you how to make the damn thing, either, but refer you to Dingflinger et al. in the even earlier literature. I have had the Dingflinger-level papers themselves send me to yet a third reference, by now something written during the Weimar Republic and of course containing the finest spectral characterization data available in 1931, which ain’t much. Would-it-have-killed-you-to-put-in-the-procedure-and-the-NMR-data, etc.

So let’s make sure not to forget the major influences of laziness and stupidity on citation behavior. Those at least are honest; fools can be very sincere indeed. And those are really the only explanations that I can come up with for what’s described in this recent publication (commentary here). It describes the situation in the social sciences literature around the “Hawthorne Effect”.

Let’s start out by stipulating that the effect itself is a myth (one of several scientific myths that the open-access paper references in its introduction). This one goes back to the 1920s and studies of worker behavior at Western Electric’s Hawthorne plant. You’ve probably heard of this stuff: among other things, the study supposedly found that productivity increased when the lights in the factory were brightened, but also increased when the lights were lowered, and the take-home lesson, for decades, was that the workers’ knowledge that they were participating in a study was what actually changed their work habits. That’s not an idiotic conclusion prima facie, because there most certainly are observer effects in social science studies. The problem is, the Hawthorne work turns out to be a terrible example to use. The studies themselves are a mess by modern standards (remember, we’re talking about the 1920s), and the data are nowhere near as clean as the story has it. Referencing the “Hawthorne Effect” has over the years become a shorthand for just about any observer effect you’d like to have a lazy name for, and use of the term has been actively discouraged.

What this latest paper does is look at a set of papers that do that job of discouragement – works that actively seek to argue against the Hawthorne Effect and point out the problems with use of the term. So far, so good – this is a field trying to clean up its terminology and its thinking, and there’s nothing wrong with that. The authors identified three papers in particular that set out the detailed case against the effect and against use of the term, and then looked at papers since then that have cited these.

And there’s the problem. What they found was a rather large set of papers that cite one or more of these papers as actually affirming the reality of the Hawthorne effect. As they say, “a major explanation for the asymmetry between the affirmative articles and the negative articles appears to be not reading, or not understanding, the cited paper”, and by gosh, you can never rule that one out, for sure. It’s a remarkable situation, and of course it helps to propagate the very concepts that the original authors are trying to knock down. For example, the worst case is a 2000 paper against the Hawthorne effect and against its very utility as a concept: when these current authors looked over the text of 196 papers citing that work between 2001 and 2018, it turned out that 168 of them (over 85%) were actually affirming the Hawthorne effect (!). Some of these (a minority) noted the reference as a dissenting voice, but others just blithely cited it in a list of papers about the effect itself. The conclusion of the paper is worth thinking about:

Of course, to assess whether the three articles were successful at communicating their critique of the Hawthorne Effect, we ought to consider the number of readers that has been dissuaded from believing in and using the Hawthorne Effect in their research. For all we know this group is in majority. It is, however, a silent one. When it comes to academic publishing, the affirming articles are dominant on the issue of the Hawthorne Effect, and are likely the major contributors to the forming of the published consensus. These publications, we surmise, will efficiently recruit new believers in the effect, and in turn new affirmative citations in the literature. The findings not only demonstrate that the three efforts at criticizing the Hawthorne Effect to varying degrees were unsuccessful, but they also suggest that if the intention behind the critiques were to reduce the frequency of affirmations of the claim in the scientific corpus, they may have achieved the very opposite.

This makes me wonder if the various articles over the years warning people off of (say) useless or inappropriate chemical probes have done the job that we’ve hoped for. The way such things keep being used is not an encouraging sign. Anyone know of any direct examples of this sort of thing in the chemistry or biology literature?

 

35 comments on “Nonsense Lives On in the Citations”

  1. jim says:

    Curcumin coming up all the time? With citations, but maybe people actually select references where an effect is shown, somehow.

  2. Anonymus says:

    Not directly connected to the topic, but this nicely illustrates the problem with evaluating the importance of papers by their citations.
    I came across an article in Org Lett that seemed like nice work: the synthesis of “type xyz compounds” using “reagent abc”. We tried to reproduce the work and did not succeed. As the work had been done by an undergraduate student, it was a real possibility that he was simply not skilled enough, so I told him to do a thorough checkup of the situation, including a literature check on the article, to see how reproducible the work was. Within a few years the article got 90+ citations.
    When my student checked and read EVERY citing paper, the picture was as follows:
    More than 20 self-citations, I would say proper ones (not forced citations of one’s own work).
    The majority of the citations (40+) were of the type “to synthesize ‘type xyz compounds’, various procedures can be used” or “‘reagent abc’ is used to produce various types of products”. I guess none of these citers got further than the title of the article (or perhaps they read the abstract).
    More than 20 citations stated that “type xyz compounds” have interesting biological activity.
    There were 6 citations (out of 90+) with a direct connection to the topic, i.e. they used the actual synthetic conditions to make analogues. All 6 of them failed to apply that protocol to similar (but not identical) starting materials in similar yield. One article reported obtaining the product in very poor yield (~3%); the others failed completely.
    One article even noted that they had contacted the authors of the original paper and received useful suggestions, but had still not succeeded in obtaining the product.

    The article had a clear, catchy title containing the words “type xyz compounds” and “reagent abc”, so it was easy to cite.

  3. Isidore says:

    A few years ago I was reading a paper that cited another paper, the latter written by a former colleague, in a way that distorted or misinterpreted the latter’s findings. After rereading both to make sure that I was not missing or misinterpreting something, I contacted my colleague and brought this to his attention. He agreed with my interpretation and was puzzled, so he contacted the authors of the citing paper to inquire about the discrepancy. He told me later that the first author (who was not the corresponding author) told him in private, when they met at a conference, that because the authors’ institution did not have a subscription to the journal in which my colleague’s paper had been published, they never got a copy of the paper; but because one of the reviewers had requested a reference in that particular section of their paper, they had to come up with a citation quickly. And just like the Hawthorne Effect papers, the title of my colleague’s paper seemed appropriate and did not betray the discrepancy between the claims in the two papers, so it was added to the references to placate the reviewer.

  4. Ian Malone says:

    One paper I’m on evaluates modifications to the standard method for a particular task, one of which is (was) our new way of doing it, against the most appropriate of the standard approaches provided by the toolbox we used. Needless to say, our method was better (otherwise no paper! selection bias?). Of course we outlined the differences between the various approaches and throughout described the previous one as ‘standard-X’. As these were modifications within the original framework, method X was heavily discussed throughout.

    I’m aware of at least one subsequent paper that references our paper as the source of the method they chose to use, and it’s not the one we suggested…

  5. Josh says:

    I’ve referenced a synthesis method that way before too, with a “Bs*% et al., Journal of Obscure” citation, because the method actually sucks. The yield was like 15% and the workup was a mess. But it was a starting material I needed for later steps, and I didn’t want to waste any more time developing anything better, on top of the time, solvent, and column chromatography resin already wasted in working the mess up. Maybe I should have noted that the cited procedure was crap.

  6. Anonymous says:

    The problem you describe regarding organic synthesis has an equally frustrating analogue in molecular biology. Say you want to study or build upon a protein/gene in a similar manner to a paper in the literature. If you can’t get a plasmid directly from the source, there is often no way to obtain the sequence information for the plasmid or gene. The paper you are referencing will say something like “pBlargeblarg was obtained as a gift from Dr. Yadda (ref. 1) and modified using XhoI and EcoRI”. You go back to Dr. Yadda’s paper, and it says “pBlargeblarg was obtained from Dr. Whosit (ref. 2)”, and ref. 2 is from 1976 and Dr. Whosit died sometime in the 90s. I’ve sometimes resorted to typing out the A’s, T’s, C’s, and G’s directly from a figure in a type-written paper if I’m lucky enough to stumble upon such information.

    This is why I’m a huge proponent of requiring, at every journal that publishes any sort of molecular biology, that all plasmid sequences be deposited in Addgene and that accession numbers be published in the paper. Otherwise it’s too difficult to reconstitute experiments and reproduce the data.

    1. Sacred Bovine!!! says:

      THIS!!!

      …and not just with plasmids!! In 10 years I’ve worked with three mutant mouse models from which I have sequenced the reported genetic modification only to find that these were not the mice I was looking for. Sadly, some of this sequencing was only done AFTER we had spent months trying to replicate a phenotype that just wasn’t there.

      It only took 2 references to find the founder of my latest foray into in vivo work. However, I had to follow 5 more references through 15 years in academic labs before I found the investigator who donated the line to Jackson Labs (our source) for cryopreservation. By my math, that means that these mice were possibly (more than likely) inbred for about 20 generations by some (lazy / overworked / abused?) grad students before their embryos went into LN2 for posterity. They certainly have the mutant allele I’m looking for, but they have been inbred so far from a B6 mouse that I figure they’re probably only 10 more generations shy of the second coming of the dodo.

      Here’s my PSA. If you’re skeptical that a bottle actually contains what it says on the label (as you should be), you should take an NMR and run LCMS. While you can’t put a mouse in an NMR (pretty sure that would upset IACUC and get you banned from the core), you should probably think of it as a living reagent that needs to be subjected to sequencing, backcrossing and QC that is every bit as rigorous, if not more so, than what you apply to your chemistry.

      I don’t always read the literature, but when I do, I remind myself that the majority of it is irreproducible garbage. Let’s all please do better in future.

      1. Someone says:

        The Max Planck Institute around the corner from my university actually has a 14.1 T MRI scanner (600 MHz!) for animals up to rat size, so putting a mouse in to identify what’s in it might even work.
        Possibly an interesting PhD project?

        1. GKA says:

          Someone beat me to it! At my previous institution, they had a wide-bore probe custom built for doing NMR on rats and mice. Regarding animal welfare, our professor shrugged and said they simply loved hanging by their teeth…

  7. Project Osprey says:

    Points 1 and 3 are major problems for Wikipedia’s chemical editing community. References sometimes seem to be added on the basis of title or abstract, because editors don’t have access to the relevant journals (or just can’t be bothered to read them). Deleting references which appear appropriate but in fact lack the relevant details can be tricky, as it’s often flagged as vandalism and reverted.

  8. JustAnotherPostdoc says:

    The best (or worst, depending on how you look at it) is when it becomes circular. Some rabbit holes have no bottom. Take three groups, all publishing close together, each of which references the other two for the preparation of a substance and promises further details in a follow-up publication that never appeared. All subsequent publications link back to one of the three, and the original authors all passed on decades ago.

  9. Random PChemist says:

    A wrinkle on this is a paper which references “to be published” work by the group. Ten years later, it still hasn’t been published. In one case, I was sufficiently interested that I looked up the first-author student’s dissertation. The information wasn’t there, either.

  10. loupgarous says:

    Laughed out loud at this:

    “In that last category I put a special irritating feature of the synthetic organic chemistry literature, one that every bench chemist sees coming before I reach the end of this sentence. I refer to the nesting-doll method of referencing the preparation of some compound: instead of telling everyone how you made it, you just say that it was prepared by the method of Arglebargle, reference 15. So you go look up the Arglebargle paper and find that they don’t tell you how to make the damn thing, either, but refer you to Dingflinger et al. in the even earlier literature. I have had the Dingflinger-level papers themselves send me to yet a third reference, by now something written during the Weimar Republic and of course containing the finest spectral characterization data available in 1931, which ain’t much. Would-it-have-killed-you-to-put-in-the-procedure-and-the-NMR-data, etc.”

    I share your anger at the gaming of the process of citing previous work in scientific publication. Google Scholar and other automated measures of impact on an author’s field are directly responsible for this, because Google makes it absurdly easy to pad that measure of scholarly impact: you just get as many people as possible to cite you in their papers.
    Google Scholar’s h-index doesn’t even reject citations in work published in predatory journals. This was a problem I faced on Wikipedia while trying to get an article on someone who’s been described as the dean of fringe physics deleted from the encyclopedia, not just for the moral issue of not misleading readers about how notable this guy is, but for the sake of the Wikipedia project’s reputation.

    But during the “articles for deletion” discussion, the guy’s Google Scholar h-index came up as proof that Professore S. was notable enough for a Wikipedia article (and that is indeed an accepted standard on Wikipedia).

    Il Professore just published in “The American Journal of Modern Physics”, another fine product of Science Publishing Group, which recently changed its formal domicile from a nonexistent address on New York’s Fashion Avenue to two floors in Rockefeller Center (just as bogus), while the journals are actually produced in places like Sudan and Pakistan.

    Only ONE member of the current editorial board of “The American Journal of Modern Physics” lives in the New World, so the “American Journal” part of the journal’s title is as full of crap as its scholarly contents. The virtue of this venue is that, for a modest fee, anyone can be published and can cite their own work countless times, as well as log-roll with other authors, citing irrelevant gradoo from them so they’ll cite you. They also offer, for an additional modest fee, positions on the Editorial Board. But of course. Then Google Scholar counts those citations, and there you go: “impact on your field”.
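
    For the record, the h-index Google Scholar reports is trivial to state: the largest h such that h of your papers have at least h citations each. A minimal sketch in Python (all citation counts invented) of why padding the counts works so well:

    ```python
    # Minimal sketch of the h-index, and of how arranged citations inflate it.
    # All citation counts here are invented for illustration.

    def h_index(citations_per_paper):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        for rank, cites in enumerate(sorted(citations_per_paper, reverse=True), 1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    honest = [12, 9, 7, 4, 3, 2, 1, 0]   # citations per paper
    padded = [c + 5 for c in honest]     # five "arranged" citations apiece

    print(h_index(honest))  # 4
    print(h_index(padded))  # 6 -- a 50% boost for a modest investment
    ```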

    Google’s slogan may be “Don’t be Evil”, but between their tattling on dissidents to the Chinese secret police and publishing a largely worthless, but widely accepted measure of researchers’ impact on their field, Google’s got some ‘splaining to do.

  11. Thoryke says:

    I’ve frequently found forms of citation abuse in my editorial/writing support work — the lowest hanging fruit are the citations of older literature that someone has just dropped in because those were the citations from their thesis or their advisor’s thesis. Sometimes those old references still hold, but often not, and it’s important to check.

    Another problem is when people are citing a paper because of something in the paper’s Introduction [which is really just another literature review, leading to the Arglebargle problem discussed earlier] rather than something that the paper demonstrated. I want the original research cited, not the “Whisper Down the Lane” version.

    What I really hate, though, are blithe citations of obscure literature that goes against current knowledge, e.g., using the flimsiest of predatory journal fluff to insist that x or y antioxidant combination is going to have immediate applications to the prevention of some disease…..and then when I go digging to confirm what the fluff really says….I discover _that_ article has cited a different article which contradicts the whole line of argument.

    Part of the problem is people grabbing for factoids, rather than reading arguments, weighing the quality of sources, and allowing their readers to benefit from that work. [Of course, such behavior isn’t restricted to science or theology….]

  12. arseniclife says:

    Every once in a while, it’s fun to check the citations to the arsenic life paper. Many cite it in the context of discussing the integrity of science, but you can always find a few citing it with a straight face. Here is one from NASA (!) in 2018:

    “An example of a compatible feature would be the ‘‘arsenoDNA’’ suggested by Wolfe-Simon et al. (2011).”

    https://www.liebertpub.com/doi/full/10.1089/ast.2017.1773

  13. MG says:

    Derek, shouldn’t that be ‘Those are at least honest’? instead of ‘Those are least are honest’.
    Not here to find your mistakes but just thought that you might want to correct.

    1. Derek Lowe says:

      So it should! Straight typing mistake, fixed. Thanks!

  14. Anon says:

    Chememes?

  15. Ongo Gablogian says:

    What about the nesting doll effect when you get to the 3rd paper and then it’s written in German or Japanese?

    1. Nameless says:

      The very old German papers are beautifully written. Consider it a gift from the gods that smile upon you.

    2. achemist says:

      If it’s a synthetic procedure, it’s always a godsend when the original paper is a Soviet or old-school German one.

      You probably need to heat in refluxing neat sulfuric acid or something similar, but they tend to work beautifully.

    3. NoLongerSynthing says:

      During my PhD research I had that exact problem. Without naming names: the synthesis of an occasionally used but important-when-needed reagent was reported in a scanned paper from the 1980s in Japanese, and none of the modern papers would report how the actual synthesis went. However, the professor would gladly send you some material if the student who made it would be put on the paper. It drove me batty in grad school because I felt I had to ration the material or else go begging back for more. It may be cynical of me, but it felt like the nearly unpublished nature of the synthesis was just a ploy to boost the group’s papers.

  16. CR says:

    I will say, the only defense of the “we made the compound via reference X” move would be page limits. Why there are page limits at online-only journals is beyond me (ACS Med Chem Lett, I’m looking at you, for one). But if one is up against the page limit, I can see why one would just add the reference rather than another scheme and paragraph. Although one could always cite the paper but still show the synthesis in the supplementary material.

    1. anon says:

      Because no one likes to pay more taxes for no reason

  17. Self-promoter. says:

    And then there are citations for “facts” that aren’t actually in the literature. We went looking for the origin of the oft-cited statistic that 90% of cancer patients die of metastases. It turns out it didn’t exist, so we put in the effort to at least assess the fraction of patients that die with metastasis. Identification of the resulting paper is left as an exercise for the reader so as not to make the self-promotion too egregious.

  18. Peter S. Shenkin says:

    @CR I agree that page limits may be part of the problem. I have a vague recollection that back in the ’60s, when I was doing (shudder) organic chemistry at my undergrad institution, journals complained if you wrote out a full experimental section instead of just citing a reference. I do wonder whether (shudder) AI could be brought to bear to read these chains of references and determine whether any of them actually describes a synthesis.

    It also seems to me that “better journals” ought to create an editorial policy that any reference cited in an article must *directly* describe the matter that the reference is being cited for. For one thing, then the (shudder) AI methodology would only have to look down one level, and the same goes for reviewers.

    Just a thought: Google Scholar might consider instituting a category of spam journals and give users the option, as gmail does, of including or excluding spam in a search. Users could create their own lists of spam journals to blacklist, and perhaps also whitelists that would override Google Scholar’s spam decisions, should Google indeed accept this proposal. I know that some citation-based quality measures take self-references into account, but I don’t know whether Google Scholar does.
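
    A minimal sketch of how such a filter might work, assuming per-user blacklists and whitelists that override a default spam list (all names and data structures here are invented; the spam journal borrows an example from an earlier comment):

    ```python
    # Hypothetical sketch of the proposed spam filter for scholarly search.
    # The default spam list and the search hits are invented for illustration.

    DEFAULT_SPAM = {"American Journal of Modern Physics"}

    def keep(hit, blacklist=frozenset(), whitelist=frozenset(), include_spam=False):
        """Whitelist overrides blacklist, which overrides the default spam list."""
        journal = hit["journal"]
        if journal in whitelist:
            return True
        if journal in blacklist:
            return False
        return include_spam or journal not in DEFAULT_SPAM

    hits = [
        {"title": "A bold new theory", "journal": "American Journal of Modern Physics"},
        {"title": "A careful measurement", "journal": "Physical Review D"},
    ]

    print([h["title"] for h in hits if keep(h)])
    # ['A careful measurement'] -- spam excluded unless the user opts in
    ```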

    Finally (aren’t you glad I’m almost done? 🙂 ), although there is controversy around the question of whether a Hawthorne effect was really observed at Hawthorne, and even if we are all in agreement that it wasn’t, it seems reasonable, since everybody knows what the phrase means, to use it for other situations where one believes such an effect genuinely may be operating. If I say, “I believe that these observations are best understood as a Hawthorne effect,” it matters little whether the original Hawthorne observations demonstrated such an effect or not. Whether such an effect is real or not in other situations all comes down to cases.

    1. anonymous coward says:

      I’d be happy if papers told a complete story. Lots of the evidence supporting the conclusions of a paper is relegated to the SI, which seems sort of bothersome when the authors spend two pages massaging/(choose your unsavory metaphor here)/slapping the authors of previous work in the field (or telling me the horrors of DCM as a solvent or t-BuO2H as an oxidant instead of Tl(NO3)3 or stoichiometric silver) when they actually have the room to present that evidence. If stuff has to be trimmed to fit page constraints, it should be the intro (“Lots of people have done this before, but their methods (1-101) are not the greatest, like ours is.”) and the references (small print? if the print has to be that bad to fit the reference list, put a large-print copy of it at the beginning or end of the SI), and maybe the optimization tables, not the key points of the work or the logical story delineating it.

      It’s good to be complete in explaining previous work leading to yours, but long reference lists make it difficult/impossible for reviewers or editors to look at references. It might also reduce some of the “You need to cite my/my journal’s previous (maybe irrelevant) work on this topic” requests in order to pass review.

      1. anonymous coward says:

        …if authors/journals trimmed back requests for citations some.

  19. matt says:

    A second comment. If you read the Talk section of the Wikipedia entry on the Hawthorne Effect, it is itself a case study par excellence in how the myth, alongside its fighting partners sloth and lack of thought, triumphs over truth. Multiple entries on “factual inaccuracy” get reverted by editors who cite a secondary source (a textbook, perhaps by their professor, perhaps by a luminary in their field) that exactly repeats the myths, the non-facts about what the original data were.

    Meanwhile, the debunking work, which uses the original data, and includes copious references to specific tables in the primary sources, is treated as a dissenting opinion.

    Further down, you see where a dozen or more references, each questioning whether the original Hawthorne effect existed or whether the Hawthorne studies have been mischaracterized, have been edited out of the page unless someone can show how they are relevant.

    It seems likely that at times in the past, the Wikipedia entry was more accurate than it is now.

    What do they teach kids in school these days lol? Certainly not the importance of going back to primary sources to avoid a mischaracterized summary of a summary of…

  20. Chemical biologist says:

    Reminds me of the MDA 435 model cell line. These cells were originally thought to be a breast cancer line but turned out to be melanoma. The misclassification was discovered over a decade ago yet a quick google search shows studies still using these cells for breast cancer research as recently as this year.

    https://link.springer.com/article/10.1007/s10549-006-9392-8

    https://pubs.rsc.org/en/content/articlelanding/2019/dt/c9dt00335e/unauth#!divAbstract

  21. Marcus Theory says:

    I always hate it when I see a synthesis citation going to someone’s thesis. Good luck digging out most PhD theses.

    I hate it even more when I see something like “for a description of the synthesis, see our upcoming paper in [journal].” All too often the Dingflingers and Arglebargles of the world only *submitted* that paper, and it never ended up getting published.

  22. Anonymous says:

    I thought I’d chime in, peripherally at first, to the topic. I did hundreds of protein assays as an undergrad and mostly used a paper that was an IMPROVED Lowry assay (>300,000 citations to date), but I think we mostly cited Lowry, not the improvement paper. And Lowry was ‘just’ an improvement on a Folin-Ciocalteu procedure. As Lowry himself admitted, the popularity of his procedure was partially attributable to its being promoted in papers, seminars, and casual discussions by some Big Names in the field (Sutherland, Kornberg, et al.).

    I would also tie that in with Djerassi naming the Birch Reduction the Birch Reduction in seminars and conversations. I think that Birch, himself, acknowledged the prior work of … I forget … Stevens? Godfrey? Wooster? … and that it was Djerassi who helped to put HIS (Birch’s) name on the map. Too bad for Stevens, Godfrey or Wooster.

    I think (can someone please do an updated citation count?) that THE most highly cited paper in organic chem is Still’s Flash Chromatography paper (JOC, 1978). When checking the citation count, keep in mind all the citation ERRORS. I have seen cites with the wrong name (C Still, WC Stille, etc.), date (1979, etc.), pages, and so on. You have to check all of the “typo” cites to get an accurate count. But then there are those who cite Still but are NOT doing flash at all. I have seen researchers pouring stuff through columns, but it is NOT flash as described in the JOC. And there’s another rub. If Still had tried to publish in J Chromatography or J Chrom Sci or some other separation science journal, the paper would have been rejected for incorrect nomenclature and equations (resolution is NOT the ratio of retention time to peak width) and for not citing relevant precedents (e.g., van Deemter). The brilliance, if you will, was publishing in JOC and writing to his targeted audience: organickers who knew what was being described, even if some of the terms or equations weren’t textbook. I think that Flash probably has over 20,000 cites if you count all the typos.
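
    Hunting down those typo variants is basically a record-linkage exercise. A rough Python sketch of the sort of grouping involved (the variant strings are invented, modeled on the typos above; keying on volume and first page is just one plausible choice):

    ```python
    # Rough sketch: grouping mangled citation variants before counting them.
    # The variant strings are invented, modeled on the typos described above.

    import re
    from collections import Counter

    def cite_key(citation):
        """Crude grouping key: the last two numbers (volume, first page)."""
        return tuple(re.findall(r"\d+", citation)[-2:])

    variants = [
        "Still, W. C. J. Org. Chem. 1978, 43, 2923.",
        "Stille, W. C. J. Org. Chem. 1978, 43, 2923.",   # wrong name
        "Still, C. J. Org. Chem. 1979, 43, 2923.",       # wrong year
        "Arglebargle, A. J. Imag. Chem. 1931, 7, 123.",  # a different paper
    ]

    print(Counter(cite_key(c) for c in variants))
    # Counter({('43', '2923'): 3, ('7', '123'): 1})
    ```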

    And typos bring us to citation propagation by simply copying others’ bibliographies without reading the papers themselves (getting closer to Derek’s topic). I believe that citation typos are what helped to trap Paquette (and others) when they plagiarized proposals from others.

    Also to Derek’s topic, I can think of some org synth papers that have been exposed as erroneous but are still cited, nonetheless. E.g., this alleged lysergic acid synthesis, Org Lett, 2004, 6(1), 3-5, was shown to be impossible (i.e., fraudulent) by Org Lett, 2012, 14(1), 296-298. Nevertheless, LSA papers and reviews (post-2012) still cite Org Lett 2004 as a legitimate synthesis, which it is not.

    1. Ian Malone says:

      On the subject of citing without regard to the wider literature, I’ve checked and https://fliptomato.wordpress.com/2007/03/19/medical-researcher-discovers-integration-gets-75-citations/ is still generating new citations, such as, “The incremental area under the curve (AUC) for plasma glucose and insulin during the OGTT was calculated with the trapezoidal rule (17).” published in an OUP journal in 2018.

      Yes, that’s the trapezoidal rule, about which Wikipedia has this to say: “A 2016 paper reports that the trapezoid rule was in use in Babylon before 50 BC for integrating the velocity of Jupiter along the ecliptic.”
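
      In case reference 17 needs spelling out: the trapezoidal rule just sums the areas of the trapezoids between successive samples. A minimal Python sketch (the time points and glucose values are invented, OGTT-style numbers):

      ```python
      # Minimal sketch of the trapezoidal rule being re-cited above.
      # Time points and glucose values are invented for illustration.

      def trapezoid_auc(times, values):
          """Area under the curve: sum of trapezoids between successive samples."""
          return sum((t1 - t0) * (v0 + v1) / 2
                     for t0, t1, v0, v1 in zip(times, times[1:], values, values[1:]))

      t = [0, 30, 60, 90, 120]        # minutes after the glucose load
      g = [5.0, 8.2, 7.1, 6.0, 5.4]   # plasma glucose, mmol/L

      print(trapezoid_auc(t, g))                      # total AUC
      print(trapezoid_auc(t, [v - g[0] for v in g]))  # incremental AUC above baseline
      ```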

  23. Anonymous says:

    Regarding citations:
    http://phdcomics.com/comics/archive.php?comicid=1108

    And, my wife’s example:
    She did a study according to a procedure given in paper A (which, in the key step, requires keeping the precipitate). Paper A cited paper B, which also provided the procedure, but with the completely opposite instruction (keep the supernatant). Paper B cited paper C (the original procedure), which instructed to keep the precipitate. Apparently, since the authors of both A and B were non-native speakers, they misapprehended the papers they cited; in paper A, however, the two errors cancelled each other out.

  24. John Smith says:

    Citations do not represent how useful or impactful a paper is, merely how popular it is. I myself have a reasonably well-cited synthesis paper where >90% of the citations refer to it as an example in the field; very few cite it as actually having been used.
