speaking of h2o++

It occurs to me, on this 66-degree evening at the end of October, that if the earth is really going to get noticeably warmer, the smart money should be on vacation properties in places that aren’t now vacation spots. You know, building summer houses on the Greenland coast — or maybe a mile or so in, depending on elevation. The Hoboken Marina. Something. There are pretty good projections about what’s going to happen if global temperature rises x degrees, right? We should be looking to see where oil company executives are building bungalows.

tnc nails it once more

From Ta-Nehisi Coates, “Penn State and the nationalist impulse”:

Throughout Sandusky’s trial, I’ve thought back to the crowds of students angrily defending Joe Paterno. It’s not that those students were particularly monstrous — on the contrary, it is the normalcy of their behavior, the humanity of it, that amazes. As others have said there’s [a] line between Penn State, the Catholic Church’s scandals, and the scandals among the ultra-orthodox Jews out in Brooklyn. (I hope I phrased all of that right.)

What you see is the human impulse to squelch the rights of individuals for the greater glory of a nation. We can see that even here in America, looking at civil liberties in the post-9/11 era. But in the Sandusky trial it’s boiled down in the worst possible way. The impulse is to be horrified by people defending Penn State’s handling of this, because, at the end of the day, it’s only football. But when football becomes your identity, when football raises buildings on your campus, when you so much relate to the players on the field that their affairs absorb your weekends, then it’s no longer “just football.” You take on aspects of the religious and the national.

As an academic, I naturally have a sensitivity to matters relating to college athletics, and I have predictable biases. At some point we’re going to have to come to terms with how our universities do business — and in this sense big sports are continuous with big research: lucrative pursuits at best orthogonal to what everyone knows is the core mission of universities, the education of students.

That elision conceals a lot of important differences, of course. But I think it’s an interesting insight, and new for me, so I’m going to let it stand for now.

graph theory, part deux

At risk of turning this blog into “the fMRI graph theory analysis papers” (which would probably attract more readers), here are a couple of better renderings and/or conceptions of the default-mode and task-positive networks. I’ve included only edges that represent significant correlations across subjects — the first graph as quantified by a t-test on Fisher-transformed correlations, the second by a Wilcoxon rank-sum test on raw correlations. I’ve also used a layout scheme that tries to capture the proximity between nodes.
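For the curious (i.e., future me), here’s roughly what those two edge definitions amount to in code. This is a minimal sketch with synthetic stand-in data, not the actual analysis script; and note that where the post says rank-sum, the sketch uses scipy’s one-sample signed-rank test, the closest standard analog for testing correlations against zero across subjects:

```python
# Minimal sketch of the two edge definitions described above.
# Synthetic data stands in for real per-subject correlation matrices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_rois = 20, 12
corrs = np.clip(rng.normal(0.1, 0.3, (n_subjects, n_rois, n_rois)), -0.99, 0.99)

alpha = 0.05
parametric = np.zeros((n_rois, n_rois), dtype=bool)
nonparametric = np.zeros((n_rois, n_rois), dtype=bool)

for i in range(n_rois):
    for j in range(i + 1, n_rois):
        r = corrs[:, i, j]
        # Parametric edge: one-sample t-test on Fisher-transformed correlations.
        _, p_t = stats.ttest_1samp(np.arctanh(r), 0.0)
        # Nonparametric edge: Wilcoxon test on raw correlations. (The post says
        # rank-sum; the one-sample analog scipy offers is the signed-rank test.)
        _, p_w = stats.wilcoxon(r)
        parametric[i, j] = parametric[j, i] = p_t < alpha
        nonparametric[i, j] = nonparametric[j, i] = p_w < alpha
```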

Parametric:

Nonparametric:

The parametric and nonparametric edge definitions yield pretty much exactly the same organization, with DMN and task-positive networks highly intraconnected (is that a word?) but sparsely interconnected. Both approaches also capture an isolated subnetwork in bilateral parahippocampal cortex and accurately ostracize the cerebellar ROI, which isn’t actually part of the DMN or task-positive networks — it was supposed to be posterior cingulate, but I messed it up.

Another slightly subtler feature captured by both approaches is the particular inter-network edges: positive connectivity between the DMN superior frontal ROIs (R/LSF) and task-positive dorsolateral PFC (R/LDLP) on the same side, and between the DMN “parietal” ROIs and task-positive ipsilateral loci in the intraparietal sulcus. Edited after I realized I posted the same image twice: The parietal connectivity is consistent across correlation metrics, but the prefrontal connectivity isn’t. There also looks to be some left-right symmetry in the inferotemporal connectivity within the DMN; the medial prefrontal connectivity is likewise symmetric in the parametric graph, but not so much in the nonparametric one.

Still need to work on rendering edge weights. But these graphs are much nicer.

graph theory raises its ugly head

For the last couple of weeks I’ve been working not quite as hard as I should have on a graph-theoretic analysis of some resting-state fMRI data. Thanks to Brian Avants and ANTs, I’ve generated the following average connectome for default-mode and task-positive networks:

The key thing here is that connections between nodes of the same color are overwhelmingly red (positive correlations, significant across subjects) and connections between nodes of different colors are overwhelmingly blue (negative correlations). So the default-mode network and the task-positive network are correlated with themselves and anticorrelated with one another. This is not a shocking result (see link), but it’s fun to verify in my own data with new technology. There’s something attractive about a graph theory approach to functional connectivity that more sophisticated super-data-driven approaches like ICA just don’t have — maybe because people actually have some vague sense of how to think about and analyze graphs. (For “people,” you can probably substitute “Matt” with no particular loss of accuracy.)
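The logic behind the coloring is simple enough to sketch. This is a toy version with synthetic data and generic node indices, not the real ROI set or anything from the ANTs pipeline:

```python
# Sketch of the edge-coloring logic: red for significant positive mean
# correlations, blue for significant negative ones. Synthetic data; the
# real graph uses the DMN/task-positive ROIs.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_rois = 8
mean_corr = rng.uniform(-0.5, 0.5, (n_rois, n_rois))
significant = np.abs(mean_corr) > 0.3  # stand-in for an across-subjects test

G = nx.Graph()
for i in range(n_rois):
    for j in range(i + 1, n_rois):
        if significant[i, j]:
            color = "red" if mean_corr[i, j] > 0 else "blue"
            G.add_edge(i, j, weight=mean_corr[i, j], color=color)
```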

Next: nonparametric approaches to edge analysis, visualization tweaks, and (most importantly) between-groups analysis of the effects of electrical brain stimulation on the connectome…

misuse of the word “love”

I was coding up a Python script to do some data analysis, and I accomplished with some not-all-that-clever list comprehensions what would otherwise have taken a few lines of for loops. And I was pleased, and quoth unto myself, “Gosh, I love list comprehensions.”

Then I thought, “Wait, am I using list comprehensions just to accomplish in Python what I could accomplish in Matlab with clever indexing tricks?” And I looked upon my code, and it was so.
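Something in this spirit; not my actual script, just a made-up example of the pattern:

```python
# Made-up example of the pattern in question: pulling out the trials that
# match a condition, the way Matlab's logical indexing would
# (roughly rts(strcmp(labels, 'go'))).
rts = [0.41, 0.52, 0.38, 0.61, 0.47]
labels = ["go", "stop", "go", "go", "stop"]

# The few-lines-of-for-loop version:
go_rts = []
for rt, label in zip(rts, labels):
    if label == "go":
        go_rts.append(rt)

# The one-liner I was so pleased with:
go_rts = [rt for rt, label in zip(rts, labels) if label == "go"]
```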

This is not necessarily a win for Matlab — list comprehensions presumably have more general uses than indexing tricks, and the only reason I have ever bothered to use them is that I’m used to being able to grab sections of lists with one-liners even when the way I’m subdividing those lists is a little complicated. And given that I can do what I want with one-liners in either setting, Python’s overall neatness and superior text-processing utilities give it the win, at least in this context; that’s perhaps obvious in retrospect, since what I’m doing is mathematically light and text-processing heavy. But I spent so long using Matlab as my default general-purpose programming language that these things still strike me from time to time.

great faculty job search link

Note to myself and others: Penn’s office of career services has an incredible collection of materials relevant to faculty job applications. They also, perhaps more awesomely, have copious materials related to the non-academic PhD job search.

Sorry — no angle on this, just genuine appreciation. Off-color humor and half-baked analysis will no doubt return in the next blog post.

the real power of pop economics

Nice insight from Andrew Gelman:

I think the real power of pop-economics as a tool for explaining life is that it has two opposite forms of explanation:

1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.

2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.

Argument 1 is associated with “why do they do that?” sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there’s some rational reason for what seems like silly or self-destructive behavior.

Argument 2 is associated with “we can do better” claims such as why we should fire 80% of public-school teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.

The trick is knowing whether you’re gonna get 1 or 2 above. They’re complete opposites!

It’s like Freudianism: if a person does X, that’s because of trauma Y that occurred early in life. But if the person does not do X, that’s also because of Y, it’s just that this time it’s repression. You can explain anything.

Theories that can explain anything are not necessarily useless. They can give understanding and point the way to further study. But it’s good to recognize ahead of time that the story could go in either direction.

This is, of course, no less applicable to psychology than it is to economics. And there probably is a unifying account — something like “people are rational given the information they have and their valuations of outcomes.” But of course that makes things kind of hard to specify.

If you buy that unification, here’s an interesting follow-up: Does it require us to prepend “neurotypical”? Or can we describe the decision-eroding effects of brain damage and mental illness purely in terms of information and valuation?

sad dads spank kids (?)

NPR says: Pediatricians Need To Help ‘Sad Dads’.

The report says that 41% of depressed dads spank their kids, whereas only 13% of non-depressed dads do so. That’s a gigantic difference, right? Think of how many spankings we could prevent if we could just make depressed dads spank at the rate of non-depressed dads. Right now a total of 15% of dads spank. So if we cured all the depressed dads, the rate would plummet to…

… wait for it…

… 13%!

Right. The report doesn’t tell you that only 7% of dads are depressed. You need to look at the abstract for that. So instead of adding nearly 3 percentage points to the overall spanking rate, as they currently do, spanking-normalized depressed dads would add just under 1 point.
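If you want to check me, the arithmetic fits in a few lines (rates from the report and the abstract, as quoted above):

```python
# The back-of-envelope arithmetic, spelled out. Rates come from the NPR
# report (41% vs. 13%) and the paper's abstract (7% of dads depressed).
p_depressed = 0.07
spank_depressed = 0.41
spank_not_depressed = 0.13

current = p_depressed * spank_depressed + (1 - p_depressed) * spank_not_depressed
cured = spank_not_depressed  # everyone spanks at the non-depressed rate

print(f"current overall rate: {current:.1%}")           # ~15.0%
print(f"if depressed dads spanked at baseline: {cured:.1%}")  # 13.0%
```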

You could tell me that any spanking is a bad spanking, and I wouldn’t really be able to contradict you. I don’t have any idea how bad corporal punishment is. However, I do know that reading to kids can provide a giant boost to their intellectual development — and the study (but not the report) shows that depressed dads are much less likely than non-depressed dads to read to their kids regularly.

I’m no fan of spanking, but I’m a lot more worried about kids becoming dumb.

(Yeah, I’m aware of the irony of invoking THIS AMERICAN LIFE in an attack on the scientific foundations of an NPR article. Here are some scholarly articles on the topic if you want to chase them down.)

ETA: I posted a similar but somewhat better written analysis at Partial Objects.

the practical significance of statistical significance

That’s a leading headline, so let’s be clear: It’s not zero. But the question is, can a difference that’s significant at p<0.05 support a lede like this?

THE GIST: The rich don’t get how the other half lives.

The answer, of course, is “it depends what you’re measuring.” So what’s being measured here?

The NYT article, by Pamela Paul, is a summary of a recent paper in PSYCHOLOGICAL SCIENCE by Michael Kraus, Stéphane Côté, and Dacher Keltner. The full text of the paper is available to the general public on Kraus’s Web site, so you can follow along if you want. This seems like as good a place as any for the usual caveat: I’m a cognitive neuroscientist, not a social psychologist, so my reaction is a product of, and functions on the level of, fairly broad-spectrum instincts and not specific expertise in the psychology of poverty or empathy.

As Paul mentions in the NYT article, Kraus et al. conducted three experiments. The first examined the relationship between educational attainment (used as a proxy for social class) and an emotional intelligence test that requires subjects to identify emotional expressions in faces. The second examined the relationship between subjective (i.e. self-reported) socioeconomic status and accuracy at judging the emotional expressions of an interlocutor. The third examined the relationship between a manipulation of perceived SES (instructions to think about oneself compared to the best-off or worst-off people in the USA) and a measure of empathic accuracy similar but not identical to the one in the first experiment. In each case, the experimenters found that subjects with lower SES, either actual or perceived, had higher empathic accuracy relative to those with higher SES.

So far, so good. Let’s look at the data.

HS-educated people have higher empathic accuracy than college-educated people.

People made to feel lower-class have higher empathic accuracy than people made to feel higher-class.

Don’t look at the SEMs. They say what the authors say they say. Look at the scales.

The empathic accuracy scale used in Experiment 1 is normed to a population mean of 100 and standard deviation of 15. They report each population’s mean in the text. The high-school-educated subjects had a mean empathic accuracy of 106.02; the college-educated subjects had a mean empathic accuracy of 99.40. This is a real difference; there are lots of subjects, so the estimate of the means is pretty good. But we’re talking about a difference of 6.62% on a test with 20 items, which I’d eyeball as about an item and a third. It’s also notable, by the way, that the college-educated subjects are not lower than the normed mean; they’re indistinguishable from it. It’s the high-school-educated subjects who are higher than the mean. There are any number of reasons for this, most obviously that the measure of empathic accuracy may have been normed on college students. What’s striking to me is that this feature of the data isn’t mentioned at all.

As you might have guessed, a similar analysis on Experiment 3 yields a similar result. The measure of empathic accuracy in Experiment 3 has 36 items; participants with lower manipulated social class got on average 27.08 items right, while those with higher manipulated social class got 25.23 right. This is a difference of 5.14% accuracy, even smaller than the difference in Experiment 1.
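Both back-of-envelope calculations are easy to verify. The means are the ones reported in the paper; the items conversion is my own rough translation, not the authors’:

```python
# Verifying the two differences above, using the means reported in the paper.
# The "items" conversion treats scale points as percent correct, which is my
# own rough translation.
# Experiment 1: scale normed to mean 100, SD 15; 20 items.
diff1 = 106.02 - 99.40
print(f"{diff1:.2f} points = {diff1 / 15:.2f} SD = ~{20 * diff1 / 100:.1f} items")

# Experiment 3: 36 items.
diff3 = 27.08 - 25.23
print(f"{diff3:.2f} items = {diff3 / 36:.2%} accuracy")
```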

Please note: This does not mean the work is uninteresting, insignificant, or wrong. There are other features that make it significant, notably the authors’ claim that this is the first experiment to manipulate participants’ perception of their own social class. And, as far as I can tell, the differences are real. I wouldn’t believe them until they’re replicated (Jonah Lehrer has a very nice article on the decline effect that’s worth reading for anyone concerned about scientific epistemology), but that doesn’t mean the paper wasn’t worth publishing.

But it does mean that it is pure bullshit to say that this paper is evidence that “The rich don’t get how the other half lives.” It’s evidence that the rich are slightly worse at perceiving other people’s emotions — whether those others are rich or poor. This isn’t inconsequential; it’s new knowledge. But in terms of revising your own policies, that “slightly” is critical. Note that, in other contexts, very large differences between populations are not viewed as good grounds for treating individuals differently.

So far I’ve mostly criticized the popular interpretation of the science, not the science itself. Where I do take issue with the authors is not in the text of their paper, but in Paul’s NYT article. There, we find the following:

“Upper-class people, in spite of all their advantages, suffer empathy deficits,” Dr. Keltner said. “And there are enormous consequences.”

Keltner may be right, but let’s be clear: The paper is evidence of empathy deficits, not of “enormous consequences.” His sense that there are “enormous consequences” is what led him to do the experiment. Contrast his claim in the NYT with what he says on this topic in PSYCHOLOGICAL SCIENCE (emphasis mine):

Empathic accuracy may mediate influences of class on relationship quality, commitment, and satisfaction. It is also interesting to speculate about the costs of heightened empathic accuracy for overall health and wellbeing, particularly because lower-class individuals tend to experience chronically elevated levels of negative emotion and negative mood disorders (e.g., Gallo & Matthews, 2003).

Maybe Keltner was quoted out of context. He might have other evidence for the claim he made in the NYT. But if he did, why didn’t he mention that evidence in the paper?

Anyway, this has been a rehearsal of fourth-grade science: Keep your eye on the y-axis. My experience teaching neuroscience at Princeton would suggest that this is harder than it sounds, but it pays off.

(Also, as a matter of random derision: While Paul is right that the paper doesn’t identify the university that provided the subjects, it’s almost certainly Berkeley — Kraus only left Berkeley for UCSF a couple of months ago, and the subjects in Experiments 2 and 3 are 50% Asian, so it’s not Toronto.)

Thanks to Adrian Arroyo for forwarding me the NYT article, and to Liz Fuller for the Jonah Lehrer article on the decline effect.