
The Scientific Literature

Predation

“Beall’s List”, the best-known way of keeping track of predatory publishers, has been officially offline for some time now. Jeffrey Beall himself has said that it was taken down under “intense pressure” from his employer (the University of Colorado at Denver), although his employer says no, not at all, this was his personal decision. (I know which side of that story I believe more.) The businesses in this area are a touchy and litigious lot – well, at least when it comes to threatening lawsuits – and many legal departments would surely rather just not deal with all of the potential trouble.

It now appears, though, that the list is back online in updated form, but is being maintained anonymously. One of the people behind it told Nature that they spend several hours a week dealing with inquiries and maintaining the site:

If the journal titles aren’t already listed, the manager says they carry out an “in-depth analysis” of the publishers’ policies, checking them against a set of criteria originally laid out by Beall, and researching whether they are indexed on journal ‘whitelists’, such as the Directory of Open Access Journals or Journal Citation Reports. Journals or publishers deemed untrustworthy by the manager are included in an ‘update’ addendum on the blog. By March 2018, the new site had added 85 stand-alone journals and 27 publishers to Beall’s original lists of more than one thousand titles.

Good for them. There are people who object to the idea that such journals are being called out anonymously, and there’s no doubt that the situation is open to potential abuse (although that doesn’t seem to be what’s happening so far). But what else to do when you’re barraged by threats if you do this work openly under your own name? Cabell’s is a company that has taken on the challenge, with a subscription model blacklist (and whitelist), and it would be interesting to know what kind of flak they get from the hack publishers of the world.

As that last link says, though, there can be some confusion (and has been on Cabell’s list and others) between low-quality journals and predatory ones:

So what are the problems? The most serious is that, as currently configured, Cabell’s Blacklist perpetuates the common problem of conflating low-quality journal publishing with deceptive or predatory publishing. In this case, the conflation happens because many of the blacklisting criteria Cabell’s applies are really quality criteria (“poor grammar and/or spelling,” “does not have a clearly stated peer review policy,” “no policy for digital preservation,” etc.) that can easily end up gathering fundamentally honest but less-competently-run journals into the same net as those journals that are actively trying to perpetrate a scam. Predatory and incompetent journals do often evince some of the same traits, but these traits don’t always indicate predatory intent.

These things overlap to a good degree, but not perfectly. Many of the predatory outfits, the we-have-three-hundred-journals-and-add-two-every-day folks, tend to be very sloppy and unprofessional quick-buck artists. But there are some rather slick ones, too, that at first glance don’t look that much different from actual journals. Meanwhile, there are actual journals, the smaller and/or newer ones, that can be rough around the edges.

What do I mean by an “actual journal”, then? There are perfectly good open-access journals, so we can’t say that the distinction is whether someone has to pay to read it. I’d say that the big distinction is some level of editorial control – and by that, I don’t only mean “Just a proportion of the papers make it in”, although that’s often the case. Even in more wide-open journals such as PLoS ONE, papers are reviewed to make sure that they appear to be technically sound (they just don’t make calls on a paper’s impact or importance), and the same goes for many of the other respectable open-access titles.

And that brings up another criterion: a real journal has some care for what it publishes. A fake or predatory journal, though, does not give a damn. Their reason to exist is to publish as many manuscripts as they can get and collect the payments for them, and if those manuscripts are faked, incoherent, or plagiarized? Oh well. Did the payment clear? All right, then. Plenty of other journals and their publishers are trying to make money – and how – but their method of doing so runs through some variation of “Attain and preserve some level of respect from the readers and authors”. That’s as opposed to “Shear those sheep and keep the line moving”.

It is impossible to publish everything that comes in the door and maintain that respect, because the stuff that comes in that door is seriously contaminated with garbage. Here’s a famous post from the science-fiction publishing world, from editor Teresa Nielsen Hayden, called “Slushkiller”, about what comes in over the transom where she works. About 70% of the submissions can be dismissed out of hand, usually within seconds, because the writing is prima facie disjointed or incoherent. Another 25% of the stuff is written reasonably competently, but has a totally overused or borrowed plotline or some other such flaw that would make it unsaleable. Now you’re down to the last 5% or so, which is where the real editorial decisions actually start.

Scientific publishing is not as different as one might hope. I don’t know how many green-ink religious revelation manuscripts Nature or Science get, but I’ll bet it’s a nonzero number. There are surely a lot of papers that come in written in such a disorganized manner that they cannot even be reviewed. Beyond that, there are many, many manuscripts that are readable and apparently not wrong, but are still not right for the journal they’ve been submitted to (often because of those impact and importance factors, which is where PLoS ONE, Scientific Reports, etc. come in). Nature, for example, has said that it rejects about 60% of submitted manuscripts without review, so although the proportion is a bit better than it was at Tor Books, it still means that a solid majority of all submissions don’t even clear the first hurdle of editorial attention. PLoS ONE, for its part, has said recently that it ends up rejecting about 50% of its submissions.

The contrast with a predatory journal could not be greater. They can, and do, publish whatever gibberish comes in, as long as the money is good, while all the time pretending to be what they are not. Editorial attention of any type is incompatible with their business model. That, to me, is the difference, and that is why these journals must be shunned.

24 comments on “Predation”

  1. Dan says:

Why not name all the good journals? It’s a Herculean task, but as actual entities, rather than figments of a scammer’s imagination, they have a certain solidity of existence and you can expect them to help you. Naming bad journals is like enumerating fog.

    1. MP says:

      There are probably tens of thousands of legitimate academic journals. Maintaining a list of them would be significantly more effort than listing the small percentage of all journals that are predatory.

      1. Dan says:

That depends on how often you get sued for maintaining the “bad” list and how often they are reinvented. I get emails almost daily from entirely fictitious conferences and journals. I’m sure many are invented at the drop of a hat; that’s a pretty big space to catalogue, too.

        You could cover many with the co-operation of a few major University or national libraries. There’s no shortage of professional catalogues out there. The rest would probably be keen to assist.

        1. Matt says:

          Dan, you’re absolutely right that assembling and maintaining a list of journals would be a relatively trivial task. Many catalogers and other librarians will, I’m sure, leap into action to put together such a list, just as soon as we have usable criteria for what makes a “good” journal. You can help us out with that, right?

          Please note: at a minimum, your definition of a “good journal” needs to satisfy the following conditions:

          1. The criteria for a journal to get on the “good” list need to be applicable across disciplines–the list should be as valid for professors of music or nursing as for particle physicists and philosophers.

          2. The criteria need to be based on individual journals, not on publishers, because a publisher may have hundreds of journals with different editorial boards, processes, and affiliations. After all, it’s absurd to say that Elsevier’s 3 journals for homeopathy research have the same level of scientific rigor as The Lancet.

3. The criteria need to be equally valid for theory and praxis. Or, to use Boyer’s model, the scholarships of discovery, integration, application, and teaching.

          4. The criteria should be equally valid for all researchers, including: a student, a post-doc, a scholar seeking a high-powered research position at Harvard or Cambridge, a professor who just wants tenure so he can spend more time teaching, a researcher whose evaluations depend on quantity of publications instead of on quality, a researcher employed in industry or outside of academia, a tenured researcher who only cares about sharing experiment results, and/or a member of the public looking for in-depth information on a topic.

          5. The criteria should be equally applicable and relevant for authors at public and private institutions: research universities, comprehensive universities, undergraduate institutions both large and small, religious colleges, professional schools, liberal arts colleges, and community/technical/trade colleges. Plus commercial, non-profit, and governmental research institutions and research units within a larger organization.

          6. The criteria should not penalize journals for authors/editors/reviewers who are non-native speakers of English, or who speak/write in a particular dialect of English.

          7. The criteria should not value style over substance. Perfect copyediting and flashy design shouldn’t be more important than valid scholarship.

8. The criteria should assess journals based on their quality, not on their publishing models. A nonprofit society-run journal, a commercially-published journal, a library-published journal, and an independently published journal should all be evaluated on the scholarship they publish, not on who pays for them.

          9. The criteria should apply equally to subscription journals, fee-based open access journals, and no-fee open access journals. (One of the biggest criticisms of Beall’s list is that it only bothered to evaluate open access journals.)

          10. The criteria should not reward journals who directly or indirectly encourage authors to use p-hacking or exaggerate the novelty of their work.

          11. If considering citation counts, the criteria should accommodate the varying paces and frequencies of citations across disciplines.

          12. If considering citation counts, the criteria need to acknowledge and compensate for the fact that a citation isn’t necessarily a good thing. (I would cite you the same way whether I am building on your revolutionary discovery or disproving your unsubstantiated garbage)

          13. If considering citation counts, the criteria need to prevent distortions from one or two incredibly high-value articles. (500 citations for 1 out of 100 articles has the same Journal Impact Factor as 5 citations for each of 100 articles)

          14. The criteria should not penalize new journals or reward well-established journals. A journal’s age has no bearing on the research it might publish.

          15. The criteria should not assume that any particular set of cultural norms, societal structures, or organizational systems are universal. (Even in the English-speaking developed world, the higher education sector varies significantly between the US and England–and even among US states.)

          I’m sure there are more functional criteria that a working definition of “good” journals will need, but why don’t you start on these? Let us all know when you’ve developed something to match all of these conditions, and we can go from there.
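The arithmetic behind point 13 is worth making concrete: the Journal Impact Factor is just a mean, so a single blockbuster paper can carry a journal full of never-cited articles. A quick sketch (both “journals” and their citation counts are made up for illustration):

```python
# Point 13 illustrated: JIF is an average, so one heavily-cited
# article can mask 99 uncited ones. Both journals are hypothetical.

def impact_factor(citations_per_article):
    """JIF-style average: total citations divided by article count."""
    return sum(citations_per_article) / len(citations_per_article)

# Journal A: one article with 500 citations, 99 articles with none.
journal_a = [500] + [0] * 99
# Journal B: every one of its 100 articles cited 5 times.
journal_b = [5] * 100

print(impact_factor(journal_a))  # 5.0
print(impact_factor(journal_b))  # 5.0 -- identical, despite very different journals
```

The two distributions could hardly be less alike, yet any criterion built on the raw average cannot tell them apart, which is why point 13 asks for protection against exactly this distortion.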

          1. metaphysician says:

            And this is what I tend to call a “Leave this one for the superintelligent ASIs later” problem. . .

    2. KOH says:

At major Chinese universities, there is such a list of “good journals”. If you publish papers outside of that rather extensive list, then that publication doesn’t get credited toward your required annual publication quota. Having seen the list, I’m pleased to report that its overlap with Beall’s list is minimal.

  2. Sofia says:

    Although you make a good and clear distinction between low quality and predatory journals, should we really be all that concerned about throwing babies out with the bathwater?

Even if low-quality journals are getting caught in blacklist efforts due to their poor content controls, that should serve as a wake-up call for them to get their act together rather than a reason for blacklist makers to ease up. You don’t combine your 75% fractions with your 99% pure ones, after all.

    This of course, assumes that a journal which makes headway to improving their quality would be removed from the blacklist (and vice versa, one which declines in quality could be added).

    1. Mad Chemist says:

      Agreed. If a journal doesn’t explain its peer-review process and has poor spelling/grammar, but they’re charging people to publish papers, then they need to improve. If they don’t improve, then as far as I’m concerned, they’re predatory, even if they don’t intend to be so.

      1. anon says:

        Just because a journal has good spelling/grammar doesn’t mean they are not predatory. There are VERY famous publishers where you pay A LOT and wait about a year for your work to be published. I would argue that these journals are not much different than the predatory ones Derek mentions. In the end, the science is not good, the reviewing process is broken and it all comes down to the money you pay. The whole system is broken.

  3. Uncle Al says:

    Reselling actual crap and generating citations is big business. Respect that occasional crap is not crap at all. Consider cis-platin arising from an ineptly conducted stupid project amplified by claimable embezzlement of laboratory funding, followed by the host university screwing its faculty member…for succeeding, albeit via deviation from professional trajectory.

  4. Emjeff says:

The University of Colorado at Denver should hang their heads in collective shame. Beall was doing useful work for the scholastic community, and this so-called institution of higher education gets their panties in a wad because they might get sued. I think the risk is overblown, because if anything went to court, the predatory publishers would lose big-time. But OK, you might get sued. So what? It’s the price we must pay for maintaining academic credibility.

    1. dave w says:

      I think the issue is that having a lawsuit pending can be a Bad Thing in itself… for one thing, even for the most absurd claims, you’re now stuck with the task of sending someone to court to argue it down, lest the case be resolved against you by default… and then there’s the issue that if one has any open purchase accounts with vendors (as institutions commonly do), the fine print in the invoices often declares that any amounts outstanding become instantly due under certain circumstances, such as if the purchaser “declares bankruptcy or is sued”. (In general, any pending lawsuits drive an entity’s credit rating to zero.)

      1. Bagger Vance says:

        UCD is a state-backed (near) billion-dollar endowment institution with a raft of lawyers on salary. It’s hard to believe with their resources they’d be running scared of a few fly-by-night publishers filing nuisance suits out of their garages but apparently there we are.

    2. David Young, MD says:

It is a fear known to the chess grandmaster Aron Nimzowitsch, whose adversary Emanuel Lasker once set down an unlit cigar on the table as they were playing. “The threat is stronger than its execution.”

  5. Thoryke says:

    Dealing with predatory publishers isn’t just a hazard to researchers who ought to be sending their findings to legitimate sources [and to naive people who are driven to publish by any means available]. It can also be weaponized to transmit viruses and dig into otherwise closed electronic systems.

    Researchers at dozens of universities were tricked into sharing their university computer access passwords with foreign agents, who used a devilishly clever ruse: claiming that “I read and liked your paper….” then providing fake links that would lead to convincing mock-ups of the researcher’s home university login page. Researchers who fell for that bait were likely to assume they just needed to re-authenticate [as happens when one is ‘kicked out of the system’ for whatever random reason] but, when they did that — SURPRISE! — They’d just given their login credentials to people who should never have them.

    Details here: https://www.justice.gov/usao-sdny/pr/nine-iranians-charged-conducting-massive-cyber-theft-campaign-behalf-islamic

    Hat tip to the BELS editors (US) and the Society of Editors and Proofreaders (UK), whose work really matters….

  6. Duncan Bayne says:

    On the off chance the maintainers of the list are reading this: I suggest using IPFS to persist the list, for reasons of decentralization and resilience to censorship.

    http://www.atnnn.com/p/ipfs-hosting/

    Feel free to fire me an email if you’d like more details.

  7. milkshake says:

I have had a recent bad experience that convinced me the ACS now also belongs on the list. The bad-faith behavior of the editors of the journal Biomacromolecules, and the actual response I got from the ACS Chief Counsel, as liaison to the ACS Ethics board, is quite something. I asked the editors to investigate retaliatory plagiarism of my research and invention (covered by seven patent applications and an ACS conference presentation), and they prevaricated, dragged their feet, and then flat-out refused to even look into the matter. They acted like complete crooks. Then the ACS lawyer told me to get a lawyer… The funny thing is that the CEO of my former employer, who did this to me in retaliation, has since had to resign over misconduct.

    1. The Iron Chemist says:

      It is astonishing how gutless the major journals are when it comes to dealing with bad behavior that slips through the peer review process. I’ve sworn off publishing in RSC journals over that “MadLibs”-style paper that ripped off Chuan He’s work several years ago.

  8. Thomas Lumley says:

    Lower-ranked genuine journals may actually have a higher rate of ‘desk rejects’ than top journals — partly because the best stuff doesn’t make it that far, and partly because getting reviewers takes more effort. (Based on my experience on the editorial side of a couple)

  9. PR Raghavan says:

I recently sent a paper to a so-called top journal, and they acknowledged receipt online at 9:43 AM and rejected it at 10:01 AM, eighteen minutes later. It was an in vitro study of a Zika virus inhibitor.
Another publication rejected a paper in two hours, on an Easter Sunday!
So much for peer review.
What we have is a swamp and a deep state just like in D.C.

    1. Bla says:

      You do realise not all manuscripts go to review? Editors make the call. Reviewers get swamped enough already. If your manuscript was rejected that quickly, it suggests there was something obviously and fundamentally wrong with it.

    2. Kevin says:

      Editors routinely reject papers prior to sending them out for peer review. I’ve got a whole pile of such rejections. Often it happens at the weekly “slush pile” meeting; sometimes a paper is so unsuitable on its face that it can be rejected on sight. (Submission is clearly out of the journal’s scope; results too difficult to find in manuscript, or not sufficiently novel; clarity of submission egregiously poor; submission wildly outside guidelines for length or figure numbers; etc.)

      Peer reviewers are a precious resource. Forcing them to waste hours formally evaluating something that an editor is competent to assess in a few minutes of reading would be unsustainable.

      It’s 2018, and editors have mobile devices now. They can access their manuscript submission systems at work, at home, on the bus, in bed, even on the loo. Some editors are better than others at segregating their work and home lives. (One of my colleagues received an acceptance – after positive external peer review – on New Year’s Day. It was mid-afternoon, so we presume the editor had time to sober up first.)

      “What we have is a swamp and a deep state just like in D.C”

      Derek, why is the Russian intelligence service trying to influence your blog?

    3. halbax says:

      I’m starting to see a trend with your submissions. Have you considered the possibility that it only takes a few minutes of reading to determine that your papers are a smoking pile of poop?

    4. DCRogers says:

And, not to assume the worst, the paper may be a fine specimen, but on a topic that does not fit (or, the opposite, has been over-covered) in a given journal. For example, computational papers are rarely considered for JACS – whether this is a blessed feature or a horrible bias I’ll leave to your judgement.
