
The Dark Side

Cite My Papers. Or Else.

The ways to mess around with the peer-review process are legion, but these schemes are getting a bit easier to catch. That’s what I take away from this paper, from two bibliographic scientists at Elsevier who set up a system to do just that. One hears tales of reviewers who will look more favorably on your manuscript if you cite some of their own work (although I can say that I haven’t run into this behavior myself, fortunately). And now these folks can be tracked down by the signatures they leave in the citation record.

That paper will also take you to some examples of other behavior such as “citation stacking” (where a journal strongly encourages authors to cite other papers that have been published in the journal, so as to raise the title’s own database ranking) and “citation cartels”, a more elaborate version of the same idea, where several titles collaborate to cite each other and artificially float everyone’s boats. But the new work they describe comes from the study of 69,000 individual reviewers for Elsevier journals (over the period 2015-2017), and it uncovered some interesting patterns.

That’s especially apparent when you are able (as these authors are) to compare the first-submitted versions of papers coming into Elsevier journals with their final form – that is, looking directly at what are probably changes requested by reviewers. This is still a nontrivial effort; the paper goes into detail on how they worked through the various databases to make sure that they were talking about the same manuscripts and authors all the way through. It turns out that there were about 54,000 reviewers who met the cutoffs of having published 5 or more papers themselves and also having reviewed 5 or more publications, of which at least one cites the reviewer.

Now, citing a reviewer is not prima facie suspicious. After all, you’re supposed to send submitted papers out to people who know the field and are working in it, and they may very well have published papers themselves that are worth citing in such new manuscripts. But there are limits. When these authors examined how often that happens, per reviewer, there was a weird and obvious outlier set of about 1600 reviewers for whom every single paper they ever reviewed cited them in turn. A histogram shows a peak where this reviewer-citation situation happens about one out of every five times for a given reviewer, and the number of instances gradually decreases from there. By the time you get up to (say) “95% of the time the reviewer was cited in the manuscripts they reviewed”, there are almost no instances at all. And then suddenly there’s a big lump of such cases where it happens a flat 100% of the time. It doesn’t look good.
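Put concretely, the screen described above boils down to computing, per reviewer, the fraction of manuscripts they reviewed whose final versions cite them, and then looking for the pile-up at exactly 100%. Here's a minimal sketch in Python (the function names and cutoffs are illustrative, not the authors' actual pipeline):

```python
def citation_rate(outcomes):
    # outcomes: one boolean per reviewed manuscript,
    # True if the final version cites the reviewer
    return sum(outcomes) / len(outcomes)

def flag_outliers(reviewers, min_reviews=5, threshold=1.0):
    # reviewers: mapping of reviewer id -> list of booleans as above;
    # flag anyone at or above the threshold rate (the 100% cluster),
    # subject to a minimum number of reviews
    return [rid for rid, outcomes in reviewers.items()
            if len(outcomes) >= min_reviews
            and citation_rate(outcomes) >= threshold]
```

On the real data, a histogram of these rates peaks around one in five and decays toward zero – except for that isolated lump at exactly 1.0, which is what a screen like `flag_outliers` would catch.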

And it isn’t. Comparing the first-submitted versions of the papers under consideration with the final ones (in a smaller subset where both versions were available), there were hundreds of cases where the majority of those reviewer citations were not even present in the original manuscript. The authors estimate that this happens in about 0.8% of the total manuscripts submitted – and you can take that several ways. You can be glad that it’s as low as that, but remember, this net was set to catch only the most persistent and blatant forms of this behavior (stuff like this). Lower-level citation extortion wouldn’t have been flagged by these cutoffs at all. And as the authors note, this sort of thing may not even be noticed by authors for what it is, since the reviewer reports are anonymized. And even if they do see what’s going on, they may see it as a minor payment for a favorable review. As the paper says, “In one case where a reviewer had submitted 120 reviewer reports including requests to add multiple irrelevant citations in each report, only four authors refused to add the citations during revision of their paper.”

So I would take this estimate (the first of its kind) as a likely lower bound. If it leads to the identification of the biggest abusers of the system (and subsequent banning of them from reviewing papers), well, good. But there’s probably a lot more of this sort of thing being done more quietly. . .

44 comments on “Cite My Papers. Or Else.”

  1. Tom says:

    What about those reviewers who will attempt to block the publications of their competitors regardless of whether they were cited? You hear the stories of people with papers turned away a few times from a journal until they ask for someone specifically to not be a reviewer, only for it to then sail through…

  2. Delph says:

    I’ve seen exactly this with a paper I submitted as a grad student – the “anonymous” reviewer came back and asked for 5 or so references to be added, all from the same group. Wasn’t difficult to figure out who the reviewer was.

    There is also the case where you intentionally cite people you think are likely to be reviewers – they’ll immediately look upon your paper more favorably if they’re cited in it. So pre-emptive citing in that case.

    1. Curt F. says:

      I have reviewed papers where I asked the authors to cite several papers from the same group (maybe not five or nine of them though), a group that I was never affiliated with.

    2. anon says:

      Anon, because it’s not really my story, but a PhD student from our group submitted a pretty good paper. Two reviews came back, one reasonable and with constructive comments and request for more data, the other with an attack on their “scholarship” for omitting several papers, which were not particularly relevant to the work, but were all from the same group. Not a group lacking in appropriate citations either.

      Imagine being that student and the impression it gives of academia.

      We folded on the “discretion is the better part of valour” principle, but their supervisor and I both made sure the student knew what we thought of that review.

  3. anon says:

    I had a reviewer asking me to cite 9 papers from his group for a JACS revision. I ended up doing it, but f that guy.

    1. MTK says:

      I would have done the same thing. If someone wants to pad their stats, I sort of shrug and move on. Maybe I should care more in the name of scientific integrity, but to be honest, it’s not worth the hassle.

      It’s like golf. If I’m playing with someone who blatantly breaks the rules, i.e. a foot wedge or something similar, it annoys me, but not enough to call them out on it. If it’s a minor infraction, I don’t care at all. It’s on the individual and their conscience, imo.

  4. cynical1 says:

    So, is this behavior more prevalent in academia versus industry reviewers?

  5. Peter Kenny says:

    I recently experienced the opposite problem when I submitted ‘The nature of ligand efficiency’ to J Med Chem. Two of the reviewers were most insistent that certain seminal contributions to the scientific literature should not be cited. One of the great things about preprints is that we don’t have to take this sort of shit from reviewers any more and, not wanting to deprive J Med Chem of feedback on the review process, I reviewed the reviewers in the blogpost which is linked as the URL for this comment.

    1. vB says:

      Quite a few 1-1 draws in my time, even a 2-2 draw after extra time, then on to penalty shoot out but game still ended 2-2. Replay won 2-1 on a different neutral ground. Also a 0-0* draw, oppos and ref never even turned up, waved through to next round, played at Middle Bog UserSanitaryware United FC. Pass the Harpic, Captain. Match abandoned at half time due to waterlogged pitch.

      Enjoyed link to LE re-debunkering. Reminded of long lost office where long lost office Python used to strut his stuff, fist held out straight like a Dalek – LOWER LOGD… MAINTAIN POTENCY… IMPROVE SOLUBILITY… MAINTAIN PERMEABILITY… IMPROVE METABOLIC STABILITY.

      Not much that matters changed that much since then. Verily zany days indeed.

  6. C. Hall says:

    >>One hears tales of reviewers who will look more favorably on your manuscript if you cite some of their own work (although I can say that I haven’t run into this behavior myself, fortunately).

    How do you know about this?
    They are not going to state in their report that “since you cited my super-relevant work, this manuscript is great and should be accepted for publication”.
    It’s a more subtle bias, but for sure it has affected countless publication decisions…
    Humans are subjective beings by nature. To different extents, we cannot be 100% objective, especially when it comes to something that directly affects us.
    Making other people working on exactly the same subfield of science review manuscripts is a double edged sword.

  7. The Iron Chemist says:

    Here’s a comment I got from an article that was rejected without review:

    “If your [manuscript] is truly of broad general interest to the general readership of JACS why is there but only one reference to JACS?”

    The paper was subsequently accepted by Nature Chemistry.

    1. Peter Kenny says:

      JCIM declined to review a manuscript of mine on prediction of alkane/water partition coefficients (subsequently published in JCAMD) and it is possible that it fell foul of:

      “A) fall outside the aims and scope of JCIM, sometimes characterized by few if any references to papers in JCIM or other mainstream journals in computational chemistry”

      I blogged about this and some of the correspondence with JCIM is reproduced in that post which is linked as the URL for this comment.

    2. NMH says:

      From JACS to Nature Chemistry. Nice humble brag there…not unexpected from our esteemed faculty. *cough*

  8. Isidore says:

    Maybe papers could be submitted to reviewers without the actual citation, although in the text it would be indicated where a reference is cited. The editor will be responsible for ensuring and assuring the reviewers that appropriate references are included.

    1. Athaic says:

      Wouldn’t work. To start with, the editor doesn’t necessarily have the skill and certainly not the time to verify the accuracy of references. That’s precisely why he is passing this work on to reviewers, after all.

      Moreover, the point of a citation is to indicate another work that your own work builds upon, or contradicts, or something. To paraphrase Newton, every scientist stands on the shoulders of giants.
      It would become a bit complicated to review the paper if you cannot verify that the author is quoting another paper correctly.
      I once reviewed a paper from a student from a group well known for advocating This way of doing things, with a concurrent group advocating That way. Of course, the student included one citation which was supposedly bashing That way. Turned out, the quoted paper was about the issues coming with an obsolete technique, and the more modern techniques, while not lacking issues of their own, didn’t have the issue the student was claiming, as they had been designed precisely to address it.
      To make matters worse, at some point an additional reference was inserted in the text and the authors failed to update the reference numbers correctly. I spent half a day tracking down a paper from the ’60s before realizing the paper I wanted to check was the next one in the reference list.

      1. NJBiologist says:

        I haven’t had that experience, but I’ve had several MSs to review where the authors cited papers for support–but the papers didn’t support the assertions in the MS at all. If flagging sloppiness is to be part of review, the citations need to be in there.

  9. MrXYZ says:

    Could journal editors require reviewers to justify why they are asking the authors to include a reviewer citation? This is not something that would be seen by the authors but just by the editor. In some cases it could be a legitimate request, but just by asking the question you could potentially reduce the bad behavior.

    1. Curt F. says:

      Yes, editors can do whatever they want but they prefer to do as little as possible nearly all of the time. If journals are to have a meaning in this emerging preprint-based world, it seems like editor quality would be an important determinant of perceived journal quality. I hope for a future world of activist editors who play much stronger roles in the review process. I’m under no illusion that all such editors would be free of biases, but the difference is that authors (and readers) would know who is responsible, i.e. who to blame (or credit).

      1. T says:

        “editors can do whatever they want but they prefer to do as little as possible”. In my experience, they try to do as much as possible given some pretty severe time/staffing constraints. Another thing to bear in mind is that people are very angry when we fail to catch misconduct and assume we just can’t be bothered to deal with such issues, but you don’t see the many cases we do catch and address (which typically goes on behind the scenes due to libel/confidentiality issues). I actively deal with cases of unethical behaviour on a weekly basis. I think people would be shocked if they realised just how common unethical publication practices (from citation/authorship issues through to data fraud) are. I certainly was.

  10. Anon says:

    I get the impression that these pointless and pathetic issues and debates over authorship and citations are deliberately propagated in academia to distract from the fact that the salaries are so shit.

    1. Anon 2 says:

      Reminds me of how children have nothing better to worry about than how many “likes” they get on Facebook. I agree, it’s pathetic.

    2. loupgarous says:

      I see it as the quest for acclaim and recognition in demanding fields.

      As Anon 2 says, the appetite for being seen as having impacted other workers in your field is fed by what is essentially social media – in this case, Google Scholar’s h-index, which measures how many people cite a researcher’s work and how many of that researcher’s papers are so cited.

      You build a good h-index in one of two ways – either do ground-breaking work which other people cite as informing their own work, or agree to cite people and be cited for reasons other than the quality of the work being cited.

      The name of the game is acclaim. Scientists who deserve the acclaim get it by working, not politicking. I wonder seriously how many people in grantor agencies, academic research, and industrial research take Google Scholar’s H-index seriously.
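For reference, the h-index described above is simple to compute: it is the largest h such that h of a researcher’s papers each have at least h citations. A quick sketch of that tally (this is the standard definition, not Google Scholar’s internal code):

```python
def h_index(citation_counts):
    # citation_counts: citations received by each of a researcher's papers
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # still have `rank` papers with >= `rank` citations each
        else:
            break
    return h
```

That mechanism is exactly why citation-trading works: every traded citation pads the per-paper counts that feed this tally.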

    3. ex grad student says:

      LOL. Trust me. Everyone is very aware of the shit salaries.

    4. anon says:

      They’re not that pointless when your funding can depend on impact of your publications. (The alternative is it depends on the impact of where you publish, which is not good for other reasons.)

      On the positive side for requiring appropriate citation, or surveys of the literature, you also see papers where the authors have ignored work that doesn’t make theirs so novel, or that would illustrate flaws or shortcomings in what they’ve done (and I’ll believe they could be genuinely unaware, so a reviewer bringing things to their attention is trying to help them too), or even their own past papers that showed something different.

  11. MoMo says:

    The mere act of publishing brings out pathetic behaviors on all levels and sides. Publishers, authors, indexers, readers and the especially weak and enfeebled academics clinging to slim hopes of notoriety.

    Citation Indices mean nothing and Eugene Garfield, the father of the Citation Index, will come and haunt you and drag chains around your house screaming like a banshee if you people keep it up.

    He hated pretentiousness and deceit.

  12. Anonymous says:

    In departments where profs are senior editors of major journals, it wasn’t unusual for them to give mss to qualified grad students / post-docs to referee. This was partially grooming or training for students to learn about the process, it lightened the load on external referees, it served as a check on external referees, and, IMO, grad students were very serious about their refereeing and provided some of the most thorough and toughest reviews. At other times, some Profs who received mss for refereeing would pass along the paper to select grad students or post-docs to do the work that the prof would check over and sign off on as their own. (Technically, a designated referee giving the ms to a junior associate might be a breach of presumed confidentiality between the journal and the addressed referee. I have heard of cases of theft of ideas from that and similar transfers, the most famous being Paquette blaming his post-doc for stealing ideas from a third party’s unfunded grant proposal of which Paquette was a referee).

    I’ve also had the misfortune to have been involved (as a co-author, not a principal) with other editorial – refereeing hanky panky that revealed the more sinister side of the publish-or-perish business.

    As a former referee, I quite often requested the inclusion of relevant citations to prior art but never my own work. Some of these requests for citations were on the borderline of accusing the submitters of deliberate deception (or plagiarism) for FAILING to include such important precedents.

    And a few more stories, previously told In The Pipeline. (1) AJ Birch was talking with Woodward about the demands on his time, including time spent refereeing papers. RBW asked why he does that. AJB said as “paying back” for all those that referee his papers. RBW said, “My papers don’t need to be refereed.” 🙂 (2) RBW received referee reports back from an editor and a cover letter asking for revisions. RBW resubmitted the ms without any changes. It was published. (3) A grad student reviewer showed me the latest Tet Lett with a paper by EJC that he had carefully refereed. It was the original ms, unchanged. NONE of his comments had been addressed at all, not even the typos! Presumably, other referee comments were similarly ignored.

    1. W Waltman says:

      THE PRIZE WE SOUGHT IS WON

      I never worked near any
      High end high-profile high-pressure
      Groups like… Oh you’ll know who they were,
      Or any other “Fame is the Spur”
      Group you could name.

      But heard tales in the Long Lab,
      And at College Bars
      And at dinner tables.
      “Downed a whisky for every
      Postdoc cited in the talk…”

      Which leads to a thought experiment,
      A question too pointed to air out loud,
      As no wish to talk ill of the Dead
      Or be heard making
      Unfounded allegations.

      Pressure people to get Results,
      Necessary of course to a degree,
      People who take in pressure more than
      Results, and results obtained for free
      Take it from me, you’ll surely get your share.

      Good Captains value good soldiers
      As Soldiers, as much as for soldiering.
      Good Soldiers value good Captains
      For Generosity of Spirit,
      As much as for efficiency.

      Otherwise danger of… “Man hands on misery to man.
      It deepens like a coastal shelf.
      Get out as early as you can,
      And don’t have any students yourself.”

      Much better…”O Captain! my Captain! our fearful trip is done,
      The ship has weather’d every rack, the prize We sought is Won…”

  13. loupgarous says:

    On Wikipedia, I’d initiated a Request for Deletion of an article on a physicist whose claims almost everyone regarded as fringe science. I didn’t think he was notable enough to have his own Wikipedia article. I was overruled during a long discussion, partly because the article’s subject, along with other fringe and just plain undistinguished scientists, had large Google Scholar impact ratings, so that within their groups these people had the appearance of “impact within their field”. Their “field” being something other than science as most of the professional scientific world understands it, but it was enough for the author in question to merit a Wikipedia article as a notable fringe scientist.

    How’d this happen? The same way as the deep current of citations in the paper Derek shared with us, in which papers are cited in serious academic work for political reasons. In both the world of fringe science and serious academic work, the number of a paper’s citations is increasingly debased currency. Google Scholar needs to own up to its share of the blame, because all it really does is count how many citations authors have in other researchers’ work – then falsely advertises the result as “impact within one’s field”.

    That even researchers involved in serious academic work are being seduced into the kind of log-rolling practiced by people who write screeds criticizing quantum chemistry in order to get synthetic acclaim for their irreproducible and even bizarre theories is downright scary. If Pons and Fleischmann had published today, they’d have found plenty of people willing to cite their work in exchange for their doing the same, all parties gaining impact in their field. If Google Scholar says it, it must be true.

    Like the critic of quantum chemistry whose prime claims to fame are the discovery of “magnecules” (http://www.santilli-foundation.org/docs/magnecules-2017.pdf) in a process gas consisting largely of carbon monoxide, accusations that Nobel laureate Steven Weinberg and a cabal of others suppressed his “novel” body of scientific work, and selling telescopes with concave lenses which allow their users to see “invisible terrestrial entities”, more plausible scientists who want fame and “impact in their field” now can get it just by trading citations in their papers with other researchers who are also tired of doing research and proving things to be true (or not true) the boring, old-fashioned way.

    Not a shining moment for scientific publication.

    1. Skeptical says:

      “He has complained that papers he has submitted to peer-reviewed American Physical Society journals were rejected because they were controlled by a group of Jewish physicists led by Weinberg.”
      That tells you everything you need to know.

      1. loupgarous says:

        The case I mentioned above is a look at the train wreck at the end of the road.

        Automated assessments of impact on one’s field beg to be gamed. Any number of people can play them just by colluding to cite works which had no impact at all (or glancing impact) on one’s work – in exchange for publication or other consideration.

  14. luysii says:

    The other side of the referee coin is the referee’s criticisms effectively being ignored by publishing the ms in another journal. An old friend, an emeritus prof of chemical engineering (unfortunately now deceased), refereed a lot of papers. He estimated that 80% of the papers in his field, quantum chemistry, coming from China are absolute trash. According to him, China gives bonuses to people getting published in high impact journals. What he found particularly appalling was writing up a detailed list of corrections and improvements for a paper, and then finding it published totally unchanged in another journal.

    1. Anon says:

      >>According to him China gives bonuses to people getting published in high impact journals.

      I can second that opinion. We had the senior editor of Angewandte over at our university for a lecture and he said that for every paper published in JACS (or around that impact) the work group would receive around 5k dollars from the Chinese government. Furthermore the acceptance rate of manuscripts coming from China is only 10% (in Angewandte) while ms from the USA have a rate of 30-40% (depending on the field). This shows that nearly every Chinese manuscript is submitted to high impact journals regardless of the “actual quality” of the research.

  15. Dr. Wood says:

    I have reviewed papers that clearly miss major publications from people I worked with that are clearly relevant, and would contribute significantly to their literature review. I feel that it is worth pointing out “hey, you missed these papers / resources that would contribute to your understanding, you might want to consider reviewing them, and see if they contribute to your lit review.” You can suggest improvements without being a jerk about it. Also, very different than “you must cite these 5 papers of mine for you to publish.” Weird gray area, and not a clear solution in sight.

  16. KOH says:

    Few things are as frustrating as seeing a newly published paper in a high impact journal that closely resembles work that you’ve published 5-10 years prior and yet fails to reference your seminal contributions to the field. This is particularly vexing when said publications are full of such hype words as “first” and “novel”. I have reviewed several (>5) papers that were directly related to my own published work but failed to cite any of that prior art. Fortunately, the editors noted the similarity and sent it to me for review. In these cases, I have no shame in requesting that the authors cite the most relevant publications from my group (and missed citations of other relevant contributions in the field from other groups). But, I would never require that an author cite any of my work if it was not directly related to the submission.

    1. NAOH says:

      Probably because it wasn’t all that important if people continue to not cite your work.

  17. Bob the Chemist says:

    We once submitted a paper (on peptide structure) that contradicted the leader in the field. Guess who reviewed it? His demands were so bad we eventually gave up….(we would have had to rewrite the entire paper to say he was right and we were wrong…..)

  18. Vader says:

    It’s perfectly legitimate to point out that the authors of a paper you are reviewing seem to be oblivious to previous relevant work, even if it’s your own work. That’s one purpose of peer review.

    It’s a gross abuse to hold a paper hostage to its authors’ willingness to include citations of marginally relevant work in which you have an interest.

    Neither author nor reviewers can be expected to make a disinterested decision on where the line between the two falls. This is a significant weakness of peer review. That judgment has to be made by a third party, such as the editor seeking reviews, but I am dubious that happens very often.

  19. Scott says:

    Man, that raises my hackles, to the level of “strip their PhDs and fire them from their jobs for fraud.”

    That may be a bit extreme, but blatant BS like that *really* upsets me.

  20. O L d'Phogey says:

    CITATION INDEX

    Last time I looked,
    Seminal paper on my PhD work
    Had amassed 4 citations in 35 years.
    Never got asked to review related papers,
    Never been any from outside the Group,
    That I know of.
    That’s my PhD for you,
    Still taught me a thing or two,
    Even so. Like how to notice
    In recent years
    How now time of year
    Daddy Longlegs get
    Inside the house
    And wonder why?

  21. Skeptical says:

    I’m dying for someone to name the bioinformatics researcher who is the subject of the link in Derek’s penultimate paragraph.

  22. Stanislav Rádl says:

    Some time ago I reviewed a paper which urgently needed additional references. Since the paper was from my field, several papers from my group would have been suitable to cite. However, I considered it inappropriate and recommended papers of a competing group. I was surprised to find one of our references in the final version…

  23. Mark says:

    I do feel you’re all being very generous to editors here! It is ultimately the editor’s job to oversee the peer review and make sure it’s fair. The editor selects the reviewers, and assesses what they’ve written. I’m sure everyone here has come across reviews that were obviously nonsense, where a mere glance would show personal vitriol, bias, and irrelevance. Some journals have a rule that the reviewer’s comments must be forwarded to the author under all circumstances, which I understand. But how often have you received a review with a covering note from the editor: “Reviewer 2 is clearly off his rocker, and will be ignored. Please look only at the comments of reviewers 1 and 3”?
    The problem is that most editors are top academics doing it because it looks good on their CV, or because they feel it helps their status, or at best because they feel they’re making a contribution to the community. They have little idea what they’re doing, and no time to do it properly. Most of them cut corners abysmally, and justify not bothering to read anything by the quantity of articles that are submitted. There is no professionalism, just a jealously guarded self-assessment of professionalism. Reviewing will never get better unless the people who coordinate it get better.

  24. jbosch says:

    I’m going to cite in my next paper an article from 1972 that unfortunately the group or others never followed up. I basically rediscovered what they had done in 1972 in a different context and developed it much further and I’m happy I stumbled over that Japanese group where likely nobody is still alive. I should point out that I had the idea before finding that paper, but when writing a grant I wanted to double check that really nobody before me had that idea. It’s an obvious method to use and it will make a particular daily effort much easier and cut down the time about 120x to about 4-5 minutes. I could have left out that citation but it would not have been the ethical thing to do.

    Is it worth citing such old publications? I think so. Many current students think there’s an age-limitation to citations. If it’s older than 10 years why bother….

    1. PSImpson says:

      Look at it this way: if there are so many cases of reviewers asking you to cite their group’s papers, and so many cases of journals asking you to cite papers from their journal, what possible harm could there be in citing one old paper, of historical interest? Do it. As an acknowledgement of the group that helped you with your work. Credit where credit is due and all.
