Thursday, February 05, 2009

The Halo Effect

The book on top of my desk recently is The Halo Effect, by Philip Rosenzweig. After you've been doing heavy research for some years on a book, you start to see how all the threads are cross-connected in ever more complex ways. So, Nassim Taleb provides a cover blurb for The Halo Effect, and I already lean on Taleb's The Black Swan in my introduction (a grand sweeping attempt to re-view programming in the context of psychology, philosophy, physics(!), and human history), and I also used Taleb's Fooled by Randomness in the Psychology of Incompetence chapter.

What does The Halo Effect have to do with the psychology of programming? Actually, the name refers to a 1920 psychology study by Edward Thorndike, in which he found that superior officers tended to rate their subordinates as either good at everything or lousy at everything: no nuances in between, no people with both significant strengths and significant weaknesses.

One thing psychology is not so good at is re-integrating its own findings over time. Psychology researchers go off building up their own particular ideas, and they have little incentive to note how their idea (invariably with its own cute coined terminology) overlaps with others, or with much older research ideas. Starbuck (see below) has insights on this problem for both psychology and social science research. To me, the Halo Effect is pretty much the Fundamental Attribution Error examined in a context of group dynamics, and that is how I will use it.

The first nitty-gritty psychology chapter in my book is Attribution Errors, because I think the Fundamental Attribution Error is really the simplest and most influential psychology concept you can learn, especially as applied to others (most of the book is focused on the individual mind, so it's good to get something group-related out at the beginning). Rosenzweig's book gives me a slightly different slant on the FAE and, being at least modestly academic, provides some relevant references into the psych research literature, which I always appreciate for my next research trip to the University of Washington.

Rosenzweig relentlessly dismantles the validity of Jim Collins' uber-popular business books, based on fundamental flaws in his research model. This makes me look to see if Rosenzweig is connected to another thread: William Starbuck. But no, "Starbuck" is not in the index (though "Starbucks" is). Did Rosenzweig really not read Starbuck's astounding The Production of Knowledge, in which he dismantles not just the research methodology of some popular business books but the entire field of social science research? OK, well, Starbuck's book was recent: 2006. But Jerome Kagan is not in the index either, and he was pointing to problems with research that relies on questionnaires at least as far back as 1989, in his book "Unstable Ideas". Kagan never lets himself forget that he long ago mistakenly believed (and taught) that autism was caused by the behavior of mothers; he uses the memory of that mistake to maintain a high degree of skepticism about the limits of research.

This is the curse of modern academic life. The sheer volume of ideas produced and published each year guarantees that you will overlook some useful ideas that are highly relevant to your own. All you can do is push your ideas out there, and hope the connections you missed don't completely invalidate your work, and that they will be spotted and put together by someone else.

The flip side of this curse is that it makes possible the modern Renaissance man (or woman) of science. When Richard Feynman made his brief foray into biology, he quipped that he was able to quickly make a contribution because he hadn't wasted all that time learning the names of every little body part, like a "real" biologist has to do. In this world of information overload, the amateur who is willing to plunge into hours of reading just looking for connections can sometimes make a contribution in a field that nominally requires deep expertise.

Thus, just yesterday I find myself, sans medical degree, writing to the author of a medical study appearing in the headlines this week to point out what the experts in his field have overlooked. The headlines were about the discovery that kidney failure patients on dialysis who live at high altitudes do better than those at low altitudes. The renal specialists, of course, imagine that this must somehow be connected to the hypoxia of altitude stimulating more red blood cell production. What I know, that they don't, is that a) it takes more altitude than they imagine to stimulate red blood cell production (that literature lies in sports medicine, which nephrologists do not read) and b) there is a recently discovered and surprising effect: breathing oxygen can stimulate EPO, the natural hormone that tells the bone marrow to make more red blood cells.

The trick for me is knowing that nurses will put a dialysis patient on oxygen if their oximeter indicates that blood oxygen saturation is low. Thus, the most likely way that altitude influences dialysis patient outcomes is that those patients are getting more oxygen, and their caregivers are unaware that this can stimulate red blood cell production, just like giving them a shot of Procrit.

Of course, as my mother-in-law likes to exclaim, "But you don't get paid for any of that!" Which is true, and it makes me realize it's time to get back to writing my damn book.

1 comment:

Cate Berlin said...

They do seem the same, the halo and FAE, or at least that is what I decided when thinking about the way we credit or blame a charismatic person running a project, ignoring other factors, including luck. It has been making me crazy, trying to identify the bias. I settled on both. Thanks.