I just attended a talk by a very famous quantitative psychologist. It was a good talk, all about modeling some fairly fine aspects of the interactions between information and reward in a decision-making paradigm, and there was a lot of mathematics in it that will forever elude me. Since I don’t do this kind of work and am not well-equipped to understand it, I am probably undersensitive to how hard it is. Owing in part to these deficiencies, I’ve always appreciated Seth Roberts and Hal Pashler’s points about model-fitting (here’s a relevant post from Seth’s blog, referencing a classmate of mine from a neural networks course), and in particular the point that fitting a lot of data with a few free parameters isn’t always the coup it seems to be. I think there’s an issue of denying the antecedent at work. Everyone knows that, if your observations aren’t much more numerous than your parameters, a model fit is unimpressive; the easy inference is that, if your observations are much more numerous than your parameters, a model fit is impressive. But that’s invalid and, more to the point, not always true.

Roberts and Pashler bring up the point that “psychological data is often not surprising”; in many situations, including the ones addressed in this talk, it’s predictable that the data will take the form of, say, a logistic function. (In fact, I’d guess that psychologists tend to prefer experiments that can be easily modeled by simple functions.) If there’s a strong prior for a logistic function, the fact that you can fit a lot of data with a couple of parameters is not impressive; all it shows is that you know how to fit a logistic function.
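To make that concrete, here’s a minimal sketch (all numbers made up) of what “knowing how to fit a logistic function” amounts to: simulate 41 observations that are logistic-shaped plus noise, fit a two-parameter logistic by crude grid search, and watch R² come out near 1. The near-perfect fit reflects the shape of the data, not any insight in the model.

```python
import math
import random

def logistic(x, threshold, slope):
    """Two-parameter logistic: response probability vs. stimulus level."""
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

random.seed(0)
xs = [i / 4 for i in range(-20, 21)]  # 41 stimulus levels (hypothetical)
# Simulated "observations": a true logistic (threshold 0.5, slope 1.5)
# plus modest noise, clipped to [0, 1]
ys = [min(1.0, max(0.0, logistic(x, 0.5, 1.5) + random.gauss(0, 0.03)))
      for x in xs]

def sse(threshold, slope):
    """Sum of squared errors for a candidate parameter pair."""
    return sum((y - logistic(x, threshold, slope)) ** 2
               for x, y in zip(xs, ys))

# Crude grid search over the two free parameters
best = min(((t / 10, s / 10) for t in range(-20, 21) for s in range(1, 40)),
           key=lambda p: sse(*p))

# R^2: 41 observations, 2 parameters -- the fit looks superb,
# but only because the data were logistic-shaped to begin with
mean_y = sum(ys) / len(ys)
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r2 = 1 - sse(*best) / ss_tot
```

The "many observations, few parameters" ratio here is 41:2, yet the impressive-looking R² tells you nothing beyond the (predictable) logistic shape.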
I appreciate Seth’s admonitions to seek out surprising predictions, but I think there’s also a place for this kind of fine-scale modeling of unsurprising predictions. The problem, in my experience, is that the people who are really excellent at modeling this stuff are generally interested in the modeling to the exclusion of the really interesting next step, which is figuring out how to fix those free parameters. Is there some physiological or psychological variable that you can plug into your equation and make the model work? After all that math, I’d think this sort of inquiry would come as a welcome relief. More, it’s a way to connect with a new audience. Mathematical non-initiates aren’t likely to be engaged by the fact that you can fit data from some simple task using a lot of math — but if you can fit it using a lot of math and, say, extroversion, suddenly there’s a whole segment of the population that might prick up their ears.
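Here’s a toy illustration of what “fixing” a free parameter could look like, with entirely invented numbers: suppose you’ve fitted a slope parameter separately for each of six subjects, and you also have each subject’s extroversion score. If a simple regression predicts the fitted slopes from the trait, the parameter stops being free — it becomes one shared intercept and coefficient instead of a new number per subject.

```python
# Hypothetical per-subject data: extroversion scores and the slope
# parameter fitted for each subject (all values invented)
extroversion = [2.1, 3.4, 1.0, 4.2, 2.8, 3.9]
slopes       = [1.1, 1.7, 0.6, 2.0, 1.4, 1.9]

# Ordinary least squares: slope_i ~ a + b * extroversion_i
n = len(extroversion)
mx = sum(extroversion) / n
my = sum(slopes) / n
b = (sum((x - mx) * (y - my) for x, y in zip(extroversion, slopes))
     / sum((x - mx) ** 2 for x in extroversion))
a = my - b * mx

# If the residuals are small, the "free" parameter isn't free anymore:
# two shared numbers (a, b) replace one fitted slope per subject
predicted = [a + b * x for x in extroversion]
ss_res = sum((y, p) and (y - p) ** 2 for y, p in zip(slopes, predicted))
ss_tot = sum((y - my) ** 2 for y in slopes)
r2 = 1 - ss_res / ss_tot
```

With real data the regression would rarely be this clean, but the bookkeeping is the point: six per-subject parameters collapse into two population-level ones, and the model now makes contact with a measurable trait.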
There are, of course, plenty of other good ways to strengthen a model fit’s endorsement of a psychological hypothesis — cross-validation is an obvious one, and I imagine Roberts and Pashler and others have mentioned more. But this is one that seems paradoxically unpopular, since it’s both principled and cool.
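For completeness, here’s a minimal hold-out sketch of the cross-validation idea (simulated data, invented parameters): fit the two-parameter logistic on half the observations, then score it on the half the fit never saw. A model that merely memorized the training noise would do poorly on the held-out half; a genuinely predictive one keeps its error low.

```python
import math
import random

def logistic(x, threshold, slope):
    """Two-parameter logistic: response probability vs. stimulus level."""
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

random.seed(1)
# Simulated observations from a true logistic (threshold 0.0, slope 2.0)
# plus noise, clipped to [0, 1]
data = [(i / 4, min(1.0, max(0.0, logistic(i / 4, 0.0, 2.0)
                             + random.gauss(0, 0.05))))
        for i in range(-16, 17)]

train = data[::2]     # fit on half the observations...
held_out = data[1::2]  # ...score on the half the fit never saw

def sse(points, threshold, slope):
    """Sum of squared errors over a set of (x, y) points."""
    return sum((y - logistic(x, threshold, slope)) ** 2 for x, y in points)

threshold, slope = min(
    ((t / 10, s / 10) for t in range(-20, 21) for s in range(1, 40)),
    key=lambda p: sse(train, *p))

# Mean squared error on the held-out half: the honest report card
holdout_error = sse(held_out, threshold, slope) / len(held_out)
```

Cross-validation doesn’t fix a free parameter the way a trait measure would, but it at least certifies that the parameters you did fit generalize beyond the data that produced them.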