A friend of mine recently emailed me saying he’d been invited to lunch with Y. S. Chi, the Vice Chairman of Elsevier, ostensibly to talk about how scientists share knowledge, how technology has changed knowledge sharing in recent years, and how Elsevier might direct its future operations in that light. (Elsevier publishes most scientific journals, at least in psychology and neuroscience; notable exceptions include but are not limited to Science, Nature, the Journal of Neuroscience, and the Journal of Cognitive Neuroscience.) He wanted to know if there was anything his more loudmouthed friends had to say to Elsevier. My response, more or less intact, follows:
I strongly believe that any research paid for by taxpayers should be freely readable by taxpayers. This has nothing to do with Elsevier as long as no one else believes it, but it’s my vague impression that government granting agencies like NIH are starting to rumble about access as well. Honestly, I also think taxpayers would be a little bit disgusted by how much of their money goes to page charges, graphics charges, etc., if they knew (although they might be willing to accept this as the cost of doing business if they could read the articles). And, for myself, I’m always a little bit floored by those shelves of paper journals that we still get every month in the psychology library. I believe without proof that the costs of dead-tree printing and the ridiculousness of the fees required to print in journals are strongly related, but presumably Y.S. is the authority on whether that’s true; in any event, I’m certain that Elsevier is aware of the fact that almost no one consumes its physical product any more and is taking steps to accommodate it.
Despite the fact that it seems like the current scientific publishing mainstream is doing a lot of obvious things wrong, it’s not clear that there is a good alternative business model. My understanding of the PLoS journals’ situation is that the business model was only going to work as an iceberg — a few selective journals borne by the revenue-generating potential of journals like PLoS One, which makes a point of its inattention to anything other than scientific soundness (i.e. the results can be believed, even if they’re not novel, interesting, or informative). And it seems like the model still isn’t working, possibly because people don’t want to publish in a journal that will accept anything. This was rehearsed in a recent issue of Nature. I don’t know anything about the Frontiers series, though — maybe they’ve got it right.
Having said all this, I don’t really have any prescriptions because I don’t know how the whole enterprise currently sustains itself. It is a little bit shocking to me that fiction magazines can afford to pay their editors and their writers, while scientific journals can barely afford to pay their editors even though they’re paid by their writers. There may be a difference inasmuch as the reader:writer ratio is much higher in fiction magazines, but I actually would bet this is not the case. (I would love to be published in the IOWA REVIEW, but I don’t read it; by contrast, I at least read the table of contents of every journal I would love to be published in. Being a working writer isn’t as contingent on knowing the state of the field as being a working scientist is. On the flip side, being a working writer has almost nothing to do with publishing short stories — you make basically no money from them — but being a working scientist has everything to do with publishing papers.) I’m prepared to accept that there are good reasons for this, but I would like to know what they are before making any pronouncements about what Elsevier ought to change.
Does it make any sense to try to make scientific papers more bloggable? If you’re trying to get compensated by pay-per-view ads, maybe the right way to maximize revenue is by encouraging shorter papers, maybe even actively encouraging a given lab to submit a sequence of papers to you, e.g. by establishing some sort of expedited review for a given research project after the first accepted paper. (“OK, you got all these great results with OLR; if you send us a paper benchmarking it against Leabra and backprop, we’ll send it to the same reviewers and we’ll only send it back to you if there’s a big problem.”) Better yet: some reviewer suggestions could be incorporated into subsequent papers rather than the current paper. (“The reviewers like the work, but they really want you to benchmark against Leabra and backprop. We’ll publish what you sent contingent on a benchmarking study submitted in six months; if you don’t pony up, we’ll put a goatse up instead of the original paper and no one will ever speak to you again.”) It’s not immediately clear to me whether this system imposes perverse incentives, though, or rests too much on a buddy system between authors/labs and editors.
I could say more about the publishing culture in science and how there’s too much research that isn’t especially novel, or comes out too half-baked to be really useful, or even comes out wrong because it’s better to plant your flag than to get it right… but none of that is especially new, original, or helpful. Honestly, I think my own attitude toward the whole thing is inordinately influenced by my own desire not to have to read so much.