Here’s an article at Vox about the problems with scientific peer review, and I have to say, it’s pretty accurate. After going over how it’s supposed to work for the site’s audience, and mentioning the real problems that have become more obvious over time, they wrap up like this:
. . .Science would probably be better off if researchers checked the quality and accuracy of their work in a multi-step process with redundancies built in to weed out errors and bad science. The internet makes that much easier. Traditional peer review would be just one check; pre-print commenting, post-publication peer review, and, wherever possible, highly skilled journal editors would be others.
Before this ideal system is put in place, there’s one thing we can do immediately to make peer review better. We need to adjust our expectations about what peer review does. Right now, many people think peer review means, “This paper is great and trustworthy!” In reality, it should mean something like, “A few scientists have looked at this paper and didn’t find anything wrong with it, but that doesn’t mean you should take it as gospel. Only time will tell.”
That last part is, I’d say, absolutely correct. Nullius in verba; the Royal Society had it right. Science does not rest on a foundation of peer review; it rests on a foundation of reproducible experimental results. Peer review is just one method to get more of those out into places where other scientists can see them and build on them. Non-scientists, particularly those with some particular viewpoint that they want to see advanced, sometimes make a much bigger deal out of peer review than working scientists do. There is no magic worked in the editorial offices. The good editors and their staffs at the good journals do all they can, but everything published in the literature is provisional, and some of it is a heck of a lot more provisional than the rest. If you’ve worked in any field of research, you know that there are crap papers out there, which were presumably let through by crap reviewers and crap editors, and entire crap journals that publish little else.
Case in point: ever heard of Sensors and Actuators B? Yeah, me neither, but it’s not my field. In 2009, though, there were a couple of papers published there on quantum dots whose images look like. . .well, there’s one of them on the right. The most casual inspection, at least to my eyes, reveals this as a blatant cut-and-paste job, similar to the infamous nanorods paper. The same particles are repeated; they seem to float over the background, and they’re evenly spread out in a way that real particles just never manage to achieve. I particularly like the upper right part of the image, where whoever hacked this together apparently just decided “Ah, the hell with it” and plopped the same particle image down over and over. The comment thread at PubPeer has a useful key to that particular shot, and here’s the thread on the other 2009 paper, which also has issues. These problems have been called out since earlier this year in public forums, to no avail. (F. X. Coudert has been especially good on this).
One of the authors, Ramaier Narayanaswamy, is an editor of the journal, so that might explain a few things (although it certainly shouldn’t). Other papers from the same group have been identified with similar problems: a Microchimica Acta paper from 2012, and its PubPeer comments; and a paper from earlier in 2015 back in good ol’ Sensors and Actuators B, PubPeer comments here. These were, one presumes, peer-reviewed, but by whom? (The snarky answer is “by the peers of the sort of people who generate such images”). These journals are published by reputable firms (Elsevier and Springer), although many reputable publishing firms (Elsevier in particular) have had some pretty interesting adventures in keeping their own journals up to standard over the years.