
The Scientific Literature

The Cream Rises to the Top, But So Does the Pond Scum

Here’s a rather testy letter to the editors of The Lancet about some recent work published there by Novo Nordisk and collaborators.

Both trials produce the same finding. . .Each focuses its main conclusion not on this primary outcome, but on one of several secondary measurements: nocturnal hypoglycaemia in the first paper and overall hypoglycaemia in the second. In both, the difference was of marginal significance and no mention is made of adjustment for multiple testing. These lower hypoglycaemia rates in unblinded studies should be considered, at best, hypothesis generating. At worst they are spurious. . .
The Lancet’s reprints are a major source of revenue for the journal, and a major part of drug company marketing. These trials were written and analysed by NovoNordisk statisticians and NovoNordisk-funded professional writers. We applaud their skill, but regret the lack of editorial effort deployed to balance it. . .
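
To see why that "adjustment for multiple testing" complaint matters: if a trial highlights whichever of several secondary endpoints happens to cross p < 0.05, the false-positive risk grows with every extra comparison, which is what corrections like Bonferroni or Holm are for. Here's an illustrative sketch in Python – the p-values are invented for the example, not taken from the papers in question:

# Toy illustration of multiple-testing corrections.
# The p-values below are made up for the example, NOT from the trials discussed above.

def bonferroni(p_values, alpha=0.05):
    """Bonferroni: each p-value must beat alpha divided by the number of tests."""
    m = len(p_values)
    return [(p, p < alpha / m) for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm step-down: a less conservative sequential version of the same idea."""
    m = len(p_values)
    ordered = sorted(enumerate(p_values), key=lambda pair: pair[1])
    significant = [False] * m
    for rank, (idx, p) in enumerate(ordered):
        if p < alpha / (m - rank):
            significant[idx] = True
        else:
            break  # once one comparison fails, all larger p-values fail too
    return list(zip(p_values, significant))

# Five hypothetical secondary endpoints, one of them "marginally significant" at 0.04
secondary_p = [0.04, 0.30, 0.12, 0.51, 0.08]
print(bonferroni(secondary_p))  # 0.04 now has to beat 0.05 / 5 = 0.01: not significant
print(holm(secondary_p))        # same verdict for this set

A marginal uncorrected p-value on one of several secondary endpoints is exactly the situation the letter writers are calling, at best, hypothesis generating.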

“What are editors for?” asks the letter. This brings up something that we all may have to contend with if the scientific publishing model continues to change and erode. The publishers themselves make much of their status as gatekeepers, citing their coordination of the peer review process and their in-house editing. (The counterarguments are that the peer review is being done by free labor, and not always very effectively, and that the quality of the in-house editing varies from “pretty good” to “surely you jest”.)
These papers are a case in point. What if they are, as the letter writers contend, largely just vehicles for marketing? That sort of thing certainly does happen. Will it happen even more under some new scientific publishing system? You’d have to think that the marketing folks are wondering the same thing, but from the standpoint of a feature rather than a bug.
Marketing, though, would rather have papers to point at that are published in a prestigious journal, which is one reason that letter is being sent to The Lancet. And no matter what sort of publishing model comes along, I don’t think that we’re ever going to get rid of prestige as a factor, human nature being what it is. (And beyond that, having a stratum of recognizably prestigious journals does have its uses, although its abuses can outweigh them). It is, in fact, the prestige factor that’s keeping the current system afloat, as far as I can see.
The only thing I can think of to replace it that wouldn't be as vulnerable to the same abuses would be one where papers float to the top through reader comments and interest. Upvotes, downvotes, number of comments and replies, number of downloads and page views – these might end up as what people point to when they want to show the impact of their papers, along with the traditional measures based on citations in other papers. But while that might avoid some of the current problems, it would be open to new ones: various ways of gaming the system to boost papers beyond where they would naturally end up (and to send rival work down the charts as well?). There's also the problem that the most-discussed papers aren't a perfect proxy for the most important ones. A harder-to-comprehend paper, made that way either through its presentation or through its intrinsic subject matter, will make less headway. And deliberately buzzy, controversial stuff will rise faster and higher, even if it's not so worthwhile on closer inspection.
It’s probably impossible to come up with a system that can’t be gamed or abused. I won’t miss the current one all that much, but we’ll have to be careful not to replace it with something worse.

18 comments on “The Cream Rises to the Top, But So Does the Pond Scum”

  1. RB Woodweird says:

    Well, which would you rather have: a publication which you knew, as an open-source vehicle, might be more prone to manipulation, so you put on your BS detector before reading each and every submission, or a publication which you assume has already filtered out the dreck, so you read every article with an unguarded mind?

  2. Not a chemist says:

    I'm thinking of something along the lines of how Google does its PageRank thing. You have assorted people who write blogs, others who write actual papers, and so on, and from there you'd build a web of sorts based on how well-ranked commenters are, who are in turn ranked by their own standing/network/comments, and so on.
    Easier suggested than written, of course.
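    A minimal sketch of what that power-iteration ranking might look like – the endorsement graph and all the names below are made up purely for illustration:

    # Toy PageRank-style reputation over an invented "who endorses whom" graph.
    # Nothing here refers to real people or real data.
    endorsements = {
        "alice": ["bob", "carol"],   # alice endorses bob and carol
        "bob": ["carol"],
        "carol": ["alice"],
        "dave": ["carol"],           # dave endorses others but gets no endorsements
    }

    def pagerank(graph, damping=0.85, iterations=50):
        nodes = list(graph)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iterations):
            new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
            for source, targets in graph.items():
                if not targets:
                    continue  # a node with no endorsements to give passes nothing on
                share = damping * rank[source] / len(targets)
                for target in targets:
                    new_rank[target] += share
            rank = new_rank
        return rank

    # carol collects the most endorsements, so she ends up ranked highest;
    # dave, endorsed by nobody, sits at the bottom.
    for name, score in sorted(pagerank(endorsements).items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.3f}")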

  3. Will says:

    I feel like I've read somewhere that there is a whole industry of people who are paid to write online reviews for products and places. Any entity that is not ethically troubled by employing ghost writers to pump out “articles” as advertising copy would probably not be afraid to employ a legion of sock puppets to pump up open-source publications.
    Then again, this could be a new career for scientists laid off by the very same companies!

  4. Toad says:

    Derek, please double check the link to the article.

  5. Bruce W says:

    It seems that any system of institutional review, community trust or peer reputation can be subverted by people with a motivation to do so. Bernie Madoff and Enron both come to mind.
    I agree with Woodweird that one problem with curated journals is that the gloss of prestige tends to blunt the readers’ immune response to BS.
    The Atlantic ran a story that I think is interesting and somewhat relevant.
    http://www.theatlantic.com/technology/print/2012/05/how-the-professor-who-fooled-wikipedia-got-caught-by-reddit/257134/

  6. mass_speccer says:

    There are things like “Faculty of 1000” which styles itself as post-publication peer review (http://f1000.com/). I think the reviewers for that are selected based on knowledge/standing so that not just anyone can rate a paper. It would obviously be a pretty big undertaking to extend this (or similar) to all publications though…
    One thing that I’ve always wondered about any new system is the fate of all of the literature published under the old system. Presumably the big publishers will continue to charge subscriptions for access to all of their archive content which will probably continue to be relevant for a few years yet…

  7. Derek Lowe says:

    Just fixed the link – sorry!

  8. Morten G says:

    Aren’t the most important papers the ones still talked about and cited decades after they’ve been published? There would be a lag but it should be possible to find out what the most important stuff is (of course sock puppets are still a problem then).

  9. Anonymous says:

    The problem with the reader-interest/voting system is that many people don't think much before opining. So many comments on blogs start with the phrase “I've not read the paper, but…” (in other words, “I'm going to broadcast my views on it anyway” – views that are nearly always derogatory).

  10. lynn says:

    @6 Faculty of 1000 is actually starting an open access/online “publishing program” of its own
    http://f1000research.com/2012/01/30/f1000-research-join-us-and-shape-the-future-of-scholarly-communication-2/
    I have reservations about it – but it may provide an answer.

  11. Mark Murcko says:

    We covered this in “Alpha Shock.” Maybe it will happen sooner than we thought! Cheers.
    The AA [Auto-Assistant] started by running a search against ScienceDigg. In the early 2020s, scientists had tired of the politics surrounding journal publication and simply begun blogging their results. The thousands of scientific blogs created a great deal of confusion and made it difficult to find the best science. In an attempt to identify the highest quality work, Science Magazine merged with Digg.com, to create a system that allowed scientists around the world to rate the work of their peers. A few assistant professors seeking tenure had initially tried to game the system by voting with bots, but the clever folks at Digg had dealt with that sort of thing before. By 2027 ScienceDigg had become so embedded that the Nordic Royal Academy used it to choose Nobel laureates.

  12. Stephen Moratti says:

    Rather than a voting system, which can be gamed, the simplest thing for any journal to implement is a properly moderated comments section for all papers, with no anonymous contributors. Thus, any weak papers (or strong papers) can be marked as such, and anyone tempted to cite a paper might be deterred by a raft of adverse comments. A large set of negative comments may also make editors less likely to publish dubious work in the first place.
    I myself would mark many synthetic papers with “tried it on X, didn't work”, which might save many people lots of time.

  13. Canageek says:

    Alright, is it blasphemy to suggest that journals might profit from actually adding value? Hire better, full-time editors who actually help you make the paper sound good? Have full-time people who look for plagiarism and such? Or even arrange for key results to be duplicated before publishing them? Am I wrong in thinking there would be a market for a ‘Journal of Chemistry that has been shown to work’? What if they had someone who drew all the structures for you, so you didn't have seven different styles of drawing in the same issue, or who would turn a set of x-y data into a graph in a house style, so that grad students don't also have to be typesetters and graphic designers?
    These are all just random ideas, but what if instead of just collecting fees, as they are often accused of doing, journals actually *did* add value in a noticeable way? Or is everyone dead set on a new system? I don’t know; I’m just an undergrad who takes interest in such things.

  14. Nick Johnson says:

    Doesn't the FDA regulate publication of insulin trials differently from other pharmacologic agents? I understood that trials related to insulin must be non-inferiority trials in order to be included in the labeling. This would explain why both trials had the same result with regard to primary outcomes, which was likely efficacy as measured by change in A1C and/or FPG. So if insulin trials by design are intended to show nothing other than similar results in the primary outcomes, then what value do these trials have?
    Wait for it. Wait for it.
    The answer: secondary outcomes such as adverse events, hypoglycemic events, etc. If those secondary outcomes are found to be statistically significant by p value (and nobody publishes trials that aren't), then are the results truly “marginally significant”, as the writer of the letter claims?
    And of course the study was paid for by Novo Nordisk. What a silly point. Who else would pay for a clinical trial? I don't see the FDA writing multi-million-dollar checks to ensure due diligence on these drugs. Blog comments and letters to the editor merit at least as much skepticism as we give to clinical trials. My guess: the writer of this letter is a competitor. Probably French.

  15. anonymous says:

    @14 Insulin trials are non-inferiority trials because you can't ethically do a placebo-controlled study of an insulin product in the T1DM population. They are designed to show that the new product works as well as (with maybe a potential benefit over) currently approved products.

  16. newnickname says:

    @13, Canageek: (1) The ‘Journal of Chemistry that has been shown to work’ is Organic Syntheses (free, on-line). (2) Journals usually specify settings for use in ChemDraw and other drawing programs that authors are supposed to (but do not always) use “so you didn’t have seven different styles of drawing in the same issue”. ChemDraw comes with standard templates for ACS journals, Synthesis, etc. (3) Even tho’ journals often require formatted submissions (using MS Word) — something I consider to be a big waste of time — they still end up typesetting the final articles.

  17. Canageek says:

    @newnickname: How do you get those filled-rings that Derek so hates then? I guess I was mixing up which journals I saw the images in.

  18. newnickname says:

    @17: I don’t know what “filled-rings” you are talking about. Post some examples (references to articles) and maybe I’ll take a look.
    Do you mean benzene rings with a “circle” instead of 3 separate double bonds? In ChemDraw, select the “benzene” tool. Click in the document to produce a 6-membered ring with 3 double bonds. CONTROL-click to place a 6-membered ring with a circle in the middle to represent the aromatic sextet.
    I can’t think of any journal that specifies one style of benzene ring (bonds or ring) over another. It’s left to the author’s preference.
