

The Magic of Peer Review

Here’s an article at Vox about the problems with scientific peer review, and I have to say, it’s pretty accurate. After going over how it’s supposed to work for the site’s audience, and mentioning the real problems that have become more obvious over time, they wrap up like this:

. . .Science would probably be better off if researchers checked the quality and accuracy of their work in a multi-step process with redundancies built in to weed out errors and bad science. The internet makes that much easier. Traditional peer review would be just one check; pre-print commenting, post-publication peer review, and, wherever possible, highly skilled journal editors would be others.

Before this ideal system is put in place, there’s one thing we can do immediately to make peer review better. We need to adjust our expectations about what peer review does. Right now, many people think peer review means, “This paper is great and trustworthy!” In reality, it should mean something like, “A few scientists have looked at this paper and didn’t find anything wrong with it, but that doesn’t mean you should take it as gospel. Only time will tell.”

That last part is, I’d say, absolutely correct. Nullius in verba; the Royal Society had it right. Science does not rest on a foundation of peer review; it rests on a foundation of reproducible experimental results. Peer review is just one method to get more of those out into places where other scientists can see them and build on them. Non-scientists, particularly those with some particular viewpoint that they want to see advanced, sometimes make a much bigger deal out of peer review than working scientists do. There is no magic worked in the editorial offices. The good editors and their staffs at the good journals do all they can, but everything published in the literature is provisional, and some of it is a heck of a lot more provisional than the rest. If you’ve worked in any field of research, you know that there are crap papers out there, which were presumably let through by crap reviewers and crap editors, and entire crap journals that publish little else.

[Figure: TEM image from one of the papers in question]

Case in point: ever heard of Sensors and Actuators B? Yeah, me neither, but it’s not my field. In 2009, though, there were a couple of papers published there on quantum dots whose images look like. . .well, there’s one of them on the right. The most casual inspection, at least to my eyes, reveals this as a blatant cut-and-paste job, similar to the infamous nanorods paper. The same particles are repeated; they seem to float over the background, and they’re evenly spread out in a way that real particles just never manage to achieve. I particularly like the upper right part of the image, where whoever hacked this together apparently just decided “Ah, the hell with it” and plopped the same particle image down over and over. The comment thread at PubPeer has a useful key to that particular shot, and here’s the thread on the other 2009 paper, which also has issues. These problems have been called out since earlier this year in public forums, to no avail. (F. X. Coudert has been especially good on this).

One of the authors, Ramaier Narayanaswamy, is an editor of the journal, so that might explain a few things (although it certainly shouldn’t). Other papers from the same group have been identified with similar problems: a Microchimica Acta paper from 2012, and its PubPeer comments; and a paper from earlier in 2015 back in good ol’ Sensors and Actuators B, PubPeer comments here. These were, one presumes, peer-reviewed, but by whom? (The snarky answer is “by the peers of the sort of people who generate such images”). These journals are published by reputable firms (Elsevier and Springer), although many reputable publishing firms (Elsevier in particular) have had some pretty interesting adventures in keeping their own journals up to standard over the years.

So yes, a close look at things that have been through “peer review” will indeed convince you that it’s not some magic seal of approval. The junk mentioned above has been in the literature for years, has been questioned openly for months, and the journals involved are doing squat-all about any of it. It’s almost as if they cared more about their business model than their reputation, and it’s almost as if they can’t quite manage to see the connection between those two things. . .


35 comments on “The Magic of Peer Review”

  1. Ash (Curious Wavefunction) says:

    Peer review is a bit like nominating a presidential candidate. A paper which passes peer review is analogous to the right number of the right people picking their favorite candidate for president. It seldom says much about the objective merits (or the lack thereof) of the individual.

  2. Anon says:

    Above all, I would like to know how or why scientific fraud should be considered any more legally acceptable than financial fraud, or any other kind of fraud for personal gain at others’ expense?

    Surely, if fraud can be proven (as I imagine in this case), then those responsible should be held accountable in a court of law.

    And, in my view, those who “punish” such fraud with just a slap on the wrist are complicit in said fraud and should also be held accountable.

    1. Algirdas says:

      Yes, of course, we need lawyers to get involved. I can hardly believe we (the scientists, the geeks) discovered things like semiconductors, general relativity, or CRISPR bacterial immunity systems without university lawyers or government prosecutors breathing down our necks. I wonder why even today I am allowed to work in the lab without proper legal oversight? If not by proper attorneys, then at least by some government regulatory agency? Because you know, I could easily have defrauded the funding agency, nay, the wider society itself, by mislabeling my plasmid minipreps this morning.

      Perhaps the members of the exalted legal profession are busy fighting financial fraudsters of all sorts – that’s why we don’t get things happening in the financial sector like Bernie Madoff, the Lehman Brothers bankruptcy, and a 7+ trillion dollar bailout, right?

      1. Anon says:

        Frankly, unless you are engaged in (or want others to engage in) intentional scientific fraud, then I don’t see why you should have a problem with people being held to account for it in a court of law.

        1. dave w says:

          Hmmm… Aside from the fact that this would most likely involve some broadening of the legal definition of “fraud”, do we really want a situation that encourages folks to “take it downtown” and get the law(yers) involved in disputes related to scientific claims?

          1. David Cockburn says:

            I once was involved in a clinical study in which one of the investigators was found to have ‘invented’ his patients. This was detected when it was noticed that the routine assays were all suspiciously invariable. He set back our clinical program significantly by forcing us to audit all the other studies in it and I imagine he had a similar impact on the programs for other companies that he worked on. I’d want the lawyers to get involved in suing him as he effectively stole from us and them.

  3. Seb says:

    Well I’ve worked in chemical sensors quite a bit and I’d say Sens. Actuators B is relatively widely cited in the field, and generally trustworthy (at least that’s what I thought till now) but not awesome… think Tet. Lett. for organic synthesis. If you can figure out the crap from the rest, you might find useful papers…

  4. Ex-academic says:

    Haven’t read the article yet, but I have to object to the title (which I realize was probably forced on them by editorial staff). The problem isn’t that “peer review” doesn’t work, the problem is with very limited pre-publication review. Post-publication peer review has been extremely effective at filtering out some of the garbage that makes it through – or it would be, if journal editors weren’t so unresponsive. Lumping these together in the public’s mind seems dangerous.

  5. Anon says:

    1. Subscribers should sue journals for publishing fraudulent data.
    2. Journals should sue research institutions for allowing researchers to submit fraudulent data.
    3. And research institutions should sue individual researchers for submitting fraudulent data.

    There, that should stamp out all incentives for fraud.

    1. Phil says:

      Except for the lawyers. They win in each of those scenarios.

      1. Anon says:

        Well, at least until the fraud is stamped out.

    2. Anon 2 says:

      I agree, but then the journals should pay reviewers to do a proper job.

      1. John Galt says:

        Reviewers should refuse to work for free anyway.

  6. InfMP says:

    Martin Shkreli bought the one-of-one Wu-Tang Clan album for 2 mill

    1. Kevin Parker says:

      ‘R&D’

      1. jtd7 says:

        Speaking of, the following exchange appeared in my Facebook feed last night. From Warsaw Bar in Brooklyn NY:

        * Pretty sure Martin Shkreli is at this Deerhunter show. We keep calling his name and he won’t answer, though :-/
        * OMG please heckle him
        * Ask him about that Wu tang album he bought for 2 mil.
        * Ask him to buy you a drink and then change the order to 750 drinks

        I should add that most of these people are artists, not pharma professionals. Shkreli has made himself a public figure.

  7. Mark Thorson says:

    I don’t mean to be impolite, but how many gripers have offered to review papers for journals? It’s not easy work, and I have some sympathy for the editors who try to recruit reviewers. I was invited, and I couldn’t get past the first paragraph of the paper I was invited to review without finding errors great and small. I had to turn down the invitation because I just didn’t have the time to do a proper review. I already do enough for that journal, so I didn’t feel compelled to give them even more of my time. But I do consider it a professional duty to support our organizations and their journals when we can.

  8. Me says:

    Yeah funny that!

    The only papers I ever got asked to review were papers published by friends, who nominated me as a reviewer – following advice from on high: ‘to H**l with scientific objectivity – get your friends to review your papers!’

    1. tangent says:

      “papers published by friends, who nominated me as a reviewer”

      Yipes. Is that a common thing, in what fields?

      The process I’ve worked with, okay, it fails at blinding — it’s not hard for the author to hear who’s in the reviewer pool and make a good guess which one will be the best fit for a given paper, and not at all hard for the reviewer to count up the citations and name the author. But that’s a far cry from getting to cherrypick the reviewer yourself.

  9. Hap says:

    I’m pretty sure I’d rather read a peer-reviewed journal than one that’s not. Peer review’s not perfect and not magic (or shouldn’t be treated as such), but it seems like a reasonable sanity filter and better than nothing (though looking at those pictures and Pease’s, maybe not as much of a sanity filter as I’d like). I’m not sure what the other alternative systems are for review, and don’t know whether they work any better (or at all), and in the absence of a comparable alternative, what do you do, exactly? Post-publication review should help (anything would), but as long as incentives to publish (by authors and journals) are what they are, it seems like any system is going to have a difficult time.

  10. Sili says:

    published by reputable firms

    Elsevier

    AHAHAHAHAAHAHAHAHAHAAHAAAAAAAAAaaaaaaaaaaaaaaaaaa

  11. mg says:

    To borrow from Winston Churchill: “peer review is the worst method of assessing the quality of research, except for all the others.” Yes, there are problems, but most of the proposed solutions read like a business plan from a Stanford MBA candidate: “we’re going to fix it with the Internet!”

  12. dave w says:

    If it weren’t for the need for the “professional brownie points” that come with Formal Publication in a ‘Recognized Journal’, said journals would become obsolete – everyone would just post their papers as PDF files on the net… of course that would mean that Academic Science wouldn’t be able to do the “publish or perish” thing: but would that really be so great a loss?

  13. Sara says:

    Talking about peer review, any thoughts about this? I figure probably not since this is a blog found on AAAS… http://cenblog.org/the-safety-zone/2015/12/patrick-harran-elected-as-a-aaas-fellow/

  14. Sam Adams the Dog says:

    I liked the comment about post-publication peer-review. I think the best way forward would be to open up forum commentary on the internet associated with published articles. That presents the problem of spam and ideology interfering with bona-fide scientific discussion, but still would counter the tendency of necessarily limited pre-publication peer review to be too cursory, and as a general practice, I think it would be a net gain.

    By the way, the most prevalent problem I’ve encountered in articles I’ve been asked to review has been lack of clarity: an exposition so poor that I cannot figure out what the heck the author is trying to say, or what he actually did. An article that is unreadable is unreviewable, and all too often I have had to send articles back to the editor, saying just that.

  15. I’ve had to read a lot of epidemiology papers since moving into my current (non-med chem) job, and I’ve been appalled at the quality.

    Not fraud, just maddening sloppiness. Numbers that do not correspond between table and text, incorrect totals at the foot of tables – that kind of thing. I often wonder if I’m the first person who has ever actually paid attention to the data the paper is reporting.

    I am sometimes driven to write to the authors seeking clarification (not to grumble that errors were made, but simply because I genuinely need the numbers they were purporting to have studied). Rarely do I receive a response.

    1. Mark Thorson says:

      I’ve edited a number of papers, and I’d suggest authors should always do two things: a) spellcheck it before showing it to anybody else, and b) read through the whole thing, and read the whole thing again if you make any changes. Don’t send it out until you’ve read through the whole thing once without making any corrections at all. It’s best if you get a night’s sleep before making your last read-through, because you’ll see it with fresher eyes.

      Seeing a spelling error tells me you haven’t made an effort to polish this turd before handing it to me. Seeing a garbled sentence tells me you only wrote it and didn’t read it.

      This is the bare minimum you must do if you want me to respect your paper enough to edit it. If you really want to go the extra mile, make sure you’ve applied a consistent bibliographic format — I will check every single one — and make sure the figure references call out the intended figures — I will check those too. Don’t rely on me to catch every error if there are literally hundreds of them. Your chances of a first-class result are greatly increased when you’ve already made your best effort before I see it. If I spend half an hour marking up your bibliography, that’s half an hour I’m not spending improving your argument.

      Another final pass suggestion is to do a global search for “it”. For each instance, ask yourself if there’s any way anybody could misinterpret what you were referring to. Would it hurt to replace “it” with a definite reference, like “the yield of the reaction”? I try to imagine the reader is a non-native speaker of English, so I’m willing to sacrifice a little verbosity if I’m not being too repetitive.

  16. tangent says:

    Understandable that journalists and other people would take peer review as the big thing, a golden stamp of approval. But they also hype studies that aren’t even peer reviewed, so…

  17. Isodore says:

    Peer review is a necessary but certainly no longer sufficient (if it ever was) condition for assessing the quality, veracity and importance of a scientific publication.

  18. Erebus says:

    Re: post-publication peer-review.

    …In what way is that supposed to differ from what PubPeer and PubMed Commons are already trying to do? Is the suggestion that it become mandatory? If so, I think that participation might be a bit of a problem, as there’s really rather little discussion on either forum. (The vast majority of papers pass by without comment — including those with lots of citations.)

  19. dearieme says:

    When I was young I was told that my first duty as a reviewer was to ensure that the paper was clear enough, and complete enough, that the reader could catch any error that I might have missed. The foe is obscurity, which disguises error. In the decades that followed the quality of English written by scientists underwent a monotonic decline: so much for clarity.

  20. Validated Target says:

    @Mark Thorson (and others) about volunteering to peer review. WHEN I am asked to review, I take time to do so carefully. I reviewed and rejected (with detailed critiques) four papers for a journal. None of them were published. Then the editor stopped asking me to review. That journal publishes a mixed bag of good, bad, and terrible papers.

    I agree with the need for a mechanism for post-publication review or comments. Certainly, on-line publishing should be able to allow for that. That would keep comments with the source w/o having to search other non-indexed websites for more info.

  21. loupgarous says:

    http://blogs.wsj.com/chinarealtime/2015/08/25/fake-peer-review-scandal-shines-spotlight-on-china/ shows that fudging peer reviews to get papers into prestigious journals seems to be happening for fun and profit in some places.

    Peer review is by no means magic, and carries only a reasonable presumption, not an assurance, of good faith on the peer reviewer’s part.

    One of the more blatant instances of bad faith by a peer reviewer for a major publication was described by science journalist Gary Taubes, writing in Discover magazine (“The Name of the Game is Fame, But is it Science?”, 1986). When Paul Brown, an early researcher on these diseases with the National Institutes of Health, submitted a paper to The New England Journal of Medicine on his work on the cause of the group of diseases that includes kuru and “mad cow disease”, Stanley Prusiner – who’d just coined the word “prion” to describe that cause – was one of the article’s peer reviewers. Prusiner sent the NEJM a three-page critique of Brown’s work and recommended they not publish it, then promptly submitted his own paper on the same material to the same journal. It was not a bright day for Prusiner, who’d arguably been caught abusing the peer review process to build the edifice on which his “prion” hypothesis rested, or for the NEJM, which had been hornswoggled. Carleton Gajdusek, co-discoverer of kuru, who was responsible for characterizing the whole family of slow infectious encephalopathies, had to campaign hard to get Brown’s paper published (the NEJM rejected Prusiner’s paper).

    Of course, this was an exceptional case. Mostly, peer reviewers don’t seem to have enough power to send papers back for more work, judging from recent scandals in which (as Derek Lowe aptly put it) crap was sent out between the covers of journals connected with trusted names such as Nature.

    In the UK, there’s now a requirement that researchers taking government money publish in open access journals (instead of sending “preliminary” copies of their work to open access sites such as arxiv.org and publishing final work in pay-wall sites). In this brave new world, peer reviewers ought to be the Atlases on whose shoulders the responsibility for assuring the integrity of scientific publication rests, but that’s largely up to journal editors.

  22. loupgarous says:

    We talked about peer review and open-source journals over at Wikipedia’s Reference Desk/Science, in the wake of the paper Derek exposed as “deliriously incompetent fraud”, which turned out to be “Crap. Courtesy of a Major Scientific Publisher”.

    One idea that came up is for peer review to be externally financed (someone suggested “financed by NIH” for biomedical journals, and one can extrapolate for other disciplines). That way, it doesn’t matter if a co-author of a paper is also an editor of the journal in which it appears (that always fails the “smell test” with me), because the peer reviewers are paid by some agency unconnected to the journal.

    That might also solve the problem I’ve read about from researchers posting in “In The Pipeline”: reviewers who are no longer asked for peer reviews after giving thoughtful, properly critical reviews of scientific manuscripts.

    Of course, it might not. Government funding of peer review and research is fraught with its own perils, as some climatologists have complained. There are probably a few political hot buttons in every scientific discipline, from paleontology (“‘Patrick Stewart’ skull, eh? Let’s dump eight tons of gravel on THAT dig site”) to entomology (“Why are the bees dying?”).

    Making peer review a government function might be a cure worse than the disease. But it’s clear peer review needs… something.
