Niko Kriegeskorte, a vision researcher and fMRI methodologist at the NIH, has started blogging his meditations on the future of scientific publishing (HT Ken Norman). The Last Psychiatrist provides commentary similar in content but different in character; Kriegeskorte is a utopian on this topic, and TLP seems to view himself as a Neroid figure. (Me, I’m listening to Lily Allen sing about her ex-boyfriend’s tiny penis.) Anyway, there are some comments I’d like to pick at in each.
NK: “The fact that reviews are public makes reviewing a more meaningful and motivating activity… The power lost [deciding a paper’s fate — mjw] is the kind of power that corrupts, the power gained [providing constructive public commentary — mjw] is the kind of power that challenges us in a positive way.” I wonder, only somewhat cynically, whether the latter sentence in that quote doesn’t contradict the former. If I’m deciding a paper’s fate, I feel obliged to be careful in my review; if I’m one voice in a chorus, I might be less motivated to be thorough, assuming I’m motivated to weigh in at all.
TLP: “Journals are the rating agencies, Moody’s, they keep it sustainable by giving it AAA rating. The ratings agencies are precisely what keeps the bubble inflated, just like with the mortgages, they are what keeps research money pouring into the system.” There’s something to this — when someone on a grant review panel doesn’t know the people on a given application, their publication records are an obvious thing to look at, and of course there are various less direct ways that publication record and money are related, like tenure calculations and the ability to impress people by saying “author of x scientific papers” in your bio. The question is, what’s the better metric? Of course, any grant application contains a more obviously relevant component, namely the research proposal — presumably an excellent proposal ought to outweigh publication records; past performance doesn’t always predict future performance. The counterargument is obvious: Research is expensive; labs that have proved they can deliver deserve an advantage. The counter-counterargument is almost as obvious: Publishing isn’t “delivering.” Creating knowledge is delivering. The two aren’t necessarily all that related, although maybe some of Niko’s proposed measures would bring them closer together.
NK: “… journal prestige as an evaluative signal is compromised by circularity: Prestige derives from journal impact factors, which in turn depend on citation frequencies. Since a paper published in Nature will be cited more than the same paper published in a specialized journal, prestige — once acquired — creates its own reality.” I don’t have much to add to this, but it’s kind of fantastic coming from Niko, since he’s so sensitive to issues of circularity in fMRI research…
TLP: “If someone could look behind the ratings, and take measure of the actual value of the research, the bubble would pop faster than, well, you get the idea. Then there’s the ‘systemic risk.’ Journals collapse, academic centers collapse from lack of funding, Pharma loses the AAA rating on their studies which are done by academics, published in journals, etc.” The prospect of this happening is totally fascinating to a junior scientist who doesn’t yet have a career stake in the entrenched system; anything that increases the likelihood of not being forced to run a big lab to survive in cognitive neuroscience is interesting to me. However, I’m not sure how much sense the meltdown scenario makes. The analogy with AAA ratings on toxic assets is limited because — and this is a problem, but it’s not the same problem — very few people are qualified to assess return on scientific investment. Toxic assets are a problem because people thought (still think?) that money was going to emerge from them when there wasn’t any money there, and they claimed to have wealth based on that assurance. No one’s securitizing scientific grants as though they were debt; no one’s writing down future scientific knowledge as an asset. Maybe we should be, but we’re not.
I may have taken the analogy too literally; for science to melt down, it doesn’t have to take the financial system with it. The point is, if whoever sets the budget for the granting agencies decides that return on scientific investment is insufficient, science funding will contract, which will be destructive, albeit possibly in a sort of cleansing or cauterizing way. What’s not clear to me is that the esteem of funders for science in general rests on the “AAA ratings” provided by journals. The esteem of funders for specific investigators might — and the demise of journals might be salutary for that reason, allowing more investigators to work at a more modest funding level or rewarding more speculative or diverse lines of work. (Alternatively, it would remove a valuable source of signal to funders and degrade the quality of scientific research by giving more awards to the undeserving. Niko, TLP and I are all skeptical about that, but we might not be right.)
Anyway, enough of that for now; I’m supposed to be writing my dissertation, or at least sleeping…