Back in 2013, I mentioned the “JACS Challenge”, an interesting attempt to see if papers that eventually got cited a lot were obvious prima facie. Given a selection of older papers from the journal that readers were unfamiliar with, could they pick out the ones that ended up getting cited more?
Now this work, revised and expanded, is the subject of a paper in PLOS ONE (open access, by definition), and the author line features some blog pseudonyms, interestingly. A 2003 issue of JACS was selected, and respondents were asked the following questions:
- Which three papers in the issue do you think are the most ‘significant’ (your own definition of ‘significant’ is what is important here)?
- Without looking up numbers, which three papers do you think will have been cited the most to-date?
- Which three papers would you most want to point out to other chemists?
- Which three papers would you want to shout about from the rooftops (i.e., tell anybody about, not just chemists)?
I am glad to say that I seem to have skewed the set of respondents a bit, since it appears that many of the people who answered the survey were readers of this blog following a link. Looking over the papers that were suggested, the correlation between the first set and the third (significant, and should be shared with other chemists) was pretty strong, as you might think, but the correlation between “significant” and “will have been cited the most” was somewhat weaker.
In fact, the correlation between what respondents thought would be the most cited articles and the actual ten-years-later citation counts was quite poor (see the paper’s Figure 1). Looking at Figure 2, you can see that none of the other questions, in fact, correlate well with the real citation counts (I would be rather unhappy if these graphs represented project assay correlations!). Of course, it’s also true that the respondents disagreed pretty significantly about which papers were significant in the first place. That’s strong evidence that the survey set was indeed composed of practicing chemists, because we rarely agree on much of anything.
Are there indeed differences between “interesting, thought-provoking” papers and ones that you feel like telling other chemists about? Or between those and the ones that pick up citations? These data, though far from comprehensive, suggest that both of these may be true (as does one’s own intuition, for what that’s worth). The paper tries to correlate responses to the reported areas of specialization of the respondents, and there may well be something to that. For example, this paper was cited 325 times over the next ten years, but only five respondents to the survey picked it as one that would get cited. All five of them, though, indicated that they specialized in this general area.
On the other hand, this paper was selected by many survey respondents as one that would pick up citations, but its actual ten-year citation count was average-to-modest. My own guess here (I may have picked this one myself!) was that the title sounded like something “hot” by current standards, and that it would surely be picked up on. But that shows you the peril of such gut feelings.
What this does is drive yet another nail into the idea that current publication-based measures of research importance, quality, and impact are much good at all. They aren’t. The paper finishes up on just this point, referencing several other initiatives that are trying to overturn citation counts, h-indices, journal impact factors, and so on. These things are measures, sure enough, but they’re not necessarily measuring very much, and not necessarily what some of their users think they’re measuring!