Academia (vs. Industry)

Publishing, Perishing, Buying and Selling

Here’s another article in the Guardian that makes some very good points about the way we judge scientific productivity by published papers. My favorite line of all: “To have “written” 800 papers is regarded as something to boast about rather than being rather shameful.” I couldn’t have put it better, and I couldn’t agree more. And this part is just as good:

Not long ago, Imperial College’s medicine department were told that their “productivity” target for publications was to “publish three papers per annum including one in a prestigious journal with an impact factor of at least five.” The effect of instructions like that is to reduce the quality of science and to demoralise the victims of this sort of mismanagement.
The only people who benefit from the intense pressure to publish are those in the publishing industry.

Working in industry feels like more of a luxury than ever when I hear about such things. We have our own idiotic targets, to be sure – but the ones that really count are hard to argue with: drugs that people will pay us money for. Our customers (patients, insurance companies, what have you) don’t care a bit about our welfare, and they have no interest in keeping our good will. But they pay us money anyway, if we have something to offer that’s worthwhile. There’s nothing like a market to really get you down to reality.

26 comments on “Publishing, Perishing, Buying and Selling”

  1. HK says:

    “It’s well known that small research groups give better value than big ones, so that should be the rule.”
    Is that true, or was that a bit of sarcasm that I missed?

  2. leftscienceawhileago says:

    Hear, hear!

  3. TJMC says:

    Derek – I wholeheartedly agree with your point that the best way to measure/guide how we perform Pharma R&D is how well society values the results. Lots of time and possible paths to get there, but hopefully we keep that end goal in our “line of sight”.

  4. Screening for ideas says:

    This raises the question: is the number of interesting new discoveries proportional to the number of groups or to the number of people? It is a little like screening compounds. Do you want more scaffolds or more analogs? I think most would agree that more scaffolds are better. Is that true of research productivity?
    Assuming a fixed funding pool:
    If the former, that would argue for more groups and necessarily smaller ones.
    If the latter, that would argue for bigger groups and necessarily fewer of them. Presumably there is also some efficiency gained in a large group, but efficiently trawling through a heavily mined space may not be an advantage.

  5. TJMC says:

    #4 Screen – Great question, but there is one more variable: where can technology best leverage the benefits of each model? For instance, larger groups could benefit from novel screening tools like datamining with semantics, etc. On the other hand, more and smaller groups could take that tech (which may cost a lot less than large groups) and overcome the advantages of traditional scale in those larger groups.
    On another avenue, could (“really hot area”) collaboration tools improve the performance of decentralized and more diversified groups? Could they also improve the innovation of large, homogeneous groups?
    In other words, the traditional answers from organizational models can get turned on their heads by unexpected innovations, or by the possible success of some that are already in use. Look at what steam power, assembly lines, or the internet did to so many other business models.

  7. MTK says:

    The libertarian/free market person in me says that if the scientific community were truly a free market where the customer pays for the product, then the best and most efficient model would eventually reveal itself.
    The dummy in me, however, doesn’t quite know what the product is or who the consumer is. When the remuneration is in the form of future grant funds or tenure that is handed out by folks who may or may not be the consumer, then economic principles are pretty much out the window.
    Anyway, if we want people to publish good work, not just any work, then instead of publications being the measure, wouldn’t citations of those publications (excluding reviews) be a better measure?
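    A minimal sketch of what such a citation-based measure might look like, assuming a hypothetical list of paper records with a citation count and a flag marking reviews (the records and field names below are made up purely for illustration):

    # Hypothetical paper records: a citation count plus a flag for review articles.
    papers = [
        {"title": "Total synthesis of X", "citations": 42, "is_review": False},
        {"title": "Advances in Y (a review)", "citations": 310, "is_review": True},
        {"title": "SAR of Z analogs", "citations": 7, "is_review": False},
    ]

    def citation_score(papers):
        # Mean citations per non-review paper; 0.0 if there are none.
        originals = [p["citations"] for p in papers if not p["is_review"]]
        return sum(originals) / len(originals) if originals else 0.0

    print(citation_score(papers))  # 24.5 for the toy records above; the review is excluded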

  8. SteveM says:

    “Academic politics are so vicious precisely because the stakes are so small.”

  9. In Vivo Veritas says:

    Imperial College London?
    This guy must be exceeding expectations:
    Steve Bloom (gut hormone guy).
    1572 papers since 1969.
    Wow.
    Now, any of you big pharma types spend any time or effort trying to replicate his results or develop his hypotheses?

  10. biologist says:

    To #4 (is the number of real discoveries proportional to the number of groups or the number of scientists):
    Unfortunately, neither. There have been tens of thousands of scientists and thousands of groups working on cancer, and the result was less than anyone expected. I think it is intellectual saturation: no matter how many people work on a question, ideas come in a certain pattern and each wrong hypothesis has to be falsified. That takes time. In hot areas, major advances are often published in 3, 4, or 5 articles in parallel (e.g. HIV). While three might be good for replication, articles #4 and #5 are not really needed.

  12. Wavefunction says:

    “It’s well known that small research groups give better value than big ones, so that should be the rule.”
    Is that true, or was that a bit of sarcasm that I missed?

    It’s not sarcasm, it’s true. Small, intensely focused groups are usually much better at producing ideas and encouraging free thinking without the bureaucracy and official sanctioning. One of the casualties of “big science” is the gradual hemorrhaging of such groups from the science world.
    Just think of the top twenty scientific discoveries of the twentieth century. How many do you think were made by big groups? (Think “Watson and Crick.”)

  13. Hap says:

    Markets are good for things people need or want to have now and that can be measured and compared easily. The outcomes of research are unpredictable and unquantifiable in most cases. Research isn’t (or shouldn’t always be) directed at short-term goals whose progress can be easily evaluated, and it may sometimes tell people things they don’t want to hear. That seems like the kind of task markets would be really bad at managing.
    The problem of publication counts as a measure of scientific worth probably stems more from grant requirements than from the act of publishing itself – universities and granters want to know whether they’ve hired the right people or given the right people money, and they want readily evaluated measures to tell them. Impact factors and publication counts are easy to measure – hence they become benchmarks, even if they don’t mean anything.
    My advisor had lots of papers, and while I’m sure they weren’t all great, many of them were useful. I don’t see why having lots of papers should be embarrassing, unless none of them (or almost none of them) are any good.

  14. Anonymous says:

    Wavefunction – the sequencing of the human genome was a pretty big advance, and so were nuclear power and putting a man on the moon. Although the last one is arguably an engineering rather than a scientific achievement, I’d say it’s fair to argue that big groups can make contributions too. Some things just can’t be accomplished by a small group. Just look at drug discovery in general! Now, that’s not to say that small groups aren’t important as well; in fact, the NIH intramural program is modeled almost entirely on highly productive small groups that collaborate with each other. Just trying to play devil’s advocate here…

  15. Wavefunction says:

    Anon, yes, big groups have their place and can of course play a role in certain cases (think LHC). But as with everything else, you can do too much with big science. A couple of years ago Bob Weinberg from MIT, who is a cancer pioneer, wrote a great article in Cell arguing that the importance of big groups and consortiums has been overestimated in cancer research and that the creative give-and-take of ideas endemic to small groups is being stifled, leading to a dearth of good ideas in basic cancer research. The fact is that science flourishes best when bureaucracy is kept to a minimum, something that’s inherently hard to do in a big group. That does not mean that big groups will automatically stagnate, only that you will have to go to extra lengths to make sure that the strings are loosened. But as far as I can tell this does not often happen, and groups like the MRC, which produced a dozen Nobel Prize winners through minimal interference with scientific work, are rare. Ditto for big pharmaceutical companies. I think we had a post-war era of big science that produced some valuable results. But the pendulum has swung to the other extreme, and we again need to push small-scale creative science.

  16. Wavefunction says:

    It’s also interesting that you mention nuclear power. The only nuclear reactor that actually made money for its manufacturers was the TRIGA, a model that was designed by a small group of very smart people including Freeman Dyson and Edward Teller (this is nicely recounted in the chapter titled “Little Red Schoolhouse” in Dyson’s book “Disturbing the Universe”).

  17. Anonymous says:

    I didn’t know that – thanks! I was really referring to the Manhattan project, which was one of the models for and big successes of “big science” in the 20th century, in my opinion. I’ll be sure to check out the book.
    I should mention that I pretty much agree with you on the small group model — just didn’t want to discount some of the truly amazing discoveries made by bigger organizations.
    I think in terms of synthetic chemistry, what I hope this would mean is an end of the huge empires of synthetic chemists with groups of 30 students/postdocs, and an increase in the number of smaller, more nimble and creative groups. The groups I’ve been in have generally worked best at a smaller size.
    I think there’s a critical mass for innovation that’s somewhere around 3-7 people in a group, where you have good institutional memory and core knowledge, but few enough people that the PI can truly focus on the projects that are ongoing in the lab. But that’s just my 2 cents, and what do I know…

  18. TJMC says:

    #15 CWF – I think we tend to paint these issues with too broad a brush. Big groups can succeed and innovate, and small ones can languish. Some of it is the talent there, some is leadership, some is structure and goals/process…
    It seems to me that the process of innovation, cross-pollinating ideas and problems, etc., CAN work in groups of either size. It just takes enlightened management, or a determination to collaborate despite management. And I have seen it work both ways – in large as well as small groups.
    Before some say that “you cannot change (be free) until leadership leads or changes” (something I have heard far, far too many times in the past 30 years), I would look to what some are calling the “Arab Spring” effect. Time, distance, and collaboration no longer have to follow the traditional paths (the ones most other commenters are referring to) that we have experienced.

  19. GC says:

    This isn’t much different from “lines of code” (LOC) in software development, where people were scored by the number of lines of code they wrote.
    It took management decades to tell the difference between tons of assembly-line, churned-out buggy shit that dies the first time it sees production data, and the good stuff that might be only a couple hundred lines that took 2 or 3 weeks to write, but has no bugs, covers all the corner cases, and isn’t brittle.
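    Just to underline how crude that metric is, here is a rough sketch of the sort of count it boils down to (a hypothetical directory walk; the path and file extension are placeholders, and padding files with filler lines games it trivially):

    import os

    def count_loc(path, exts=(".py",)):
        # Naive "productivity" score: non-blank, non-comment lines per source file.
        totals = {}
        for root, _, files in os.walk(path):
            for name in files:
                if name.endswith(exts):
                    full = os.path.join(root, name)
                    with open(full, errors="ignore") as fh:
                        code = [ln for ln in fh if ln.strip() and not ln.strip().startswith("#")]
                    totals[full] = len(code)
        return totals

    # Rewards padded, copy-pasted code and penalizes concise, well-tested code equally.
    print(sum(count_loc(".").values()))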

  20. Stephen says:

    The proposal from Imperial seems quite modest. If you have a group of, say, 5, then 3 papers a year, including one good one, is about par for a chemist. Even I would be worried if I wasn’t getting that.
    Those that have many hundreds of papers are probably in big collaborations (where the effective number of students/collaborators can be up to a hundred).
    Scientists may believe that if they weren’t measured so much they would be more productive. I suspect not. Good scientists will produce good work no matter what. It is only the mediocre ones that fill up the journals with garbage. The key is to use the right metric to analyze performance.

  21. drug_hunter says:

    Let’s start with a journal most of us know pretty well — J. Med. Chem. I’d guess that less than 10% of the articles in JMC are worth reading…

  22. Bobby Shaftoe says:

    @15: Wavefunction, a great comment. The only thing you did wrong (other than writing “consortiums” rather than “consortia”) was to omit the reference to the Weinberg paper. I was curious enough to find it at Cell 2006, 126, 9. I’m familiar with some of Weinberg’s stuff, and it is difficult to dispute that he is a beast of a cancer researcher. He makes a very compelling argument in this essay about why our funding balance needs to be reconsidered to reasonably support both “innovative” small groups and “reduction to practice” large groups. It should be recognized that these are two extremes, and the usual applicants lie somewhere in between on the spectrum. Unfortunately, the pendulum currently resides closer to the data-producing, metrics-meeting, turn-the-crank large-group side of things….

  23. mike says:

    @GC – we got the same thing in medicinal chemistry. It was amazing how many more reactions were run when “number of reactions” was the major criterion used to measure productivity. And when the measure was “number of compounds registered”, that number went through the roof. But they were mostly useless reactions and useless compounds, tons of easy-to-make, meaningless analogs thrown in to pad the numbers.
    As a friend of mine keeps pointing out, “you get what you measure.”

  24. MIMD says:

    Derek,
    In blogging as you do (and as I do) for years, you’ve probably written 100x the amount that the average academic under publication pressure does.
    And what you write is of far more value, for the most part.

  25. Elvesier says:

    Just announced: Elvesier’s list of new chemistry journals for 2012. Looks awesome.

  26. Sili says:

    Ah, Colquhoun. It did indeed sound like him.
    Incidentally, he’s someone who doesn’t have much trust in the free market. His paean to the NHS is touching.

    There’s nothing like a market to really get you down to reality.

    Which is why the Republican Party is so beholden to the ideas that Global Warming and the theory of evolution are Liberal conspiracies?
    Frankly, I don’t like the idea that all research should be subject to the whims of people who cannot see beyond the next financial quarter, or at best the next election cycle.
    CERN is not perfect, but at least it’s there. Unlike the Superconducting Supercollider.
    And what market will ever make it profitable to cure malaria or prevent the spread of HIV?

Comments are closed.