Saturday, February 21, 2009

Me, The Jury

In everyone's life, there comes a day we all dread but must accept. I'm talking, of course, about the day you can no longer dodge jury duty. On the one hand, my civic duty meme says it's good to pitch in and play my part for society. On the other hand, my cynical gene says there's no way any prosecutor is going to put me on a jury, and every day I sit in district court is another day the lovely profits from this book I'm writing are postponed. On the third hand, my stay-out-of-jail meme says that the federal court system is a lot less tolerant of jury dodging than the state system, so that pretty much seals the deal!

  • + It's a federal jury, raising the odds we'll be stickin' it to the man rather than trampling the downtrodden.
  • - O.J. is off the streets, so it's unlikely to be the trial of the century.
  • + The courthouse provides free internet access.
  • - Have to get up at the butt-crack of dawn to catch a bus and be there by 8:00 am.
  • + Serving will allow some hard-working employed person to continue to be productive.
  • - There are plenty of freshly unemployed people who could do it so that I can continue to be productive.

Of course, anyone can get out of jury duty using well-documented means, but my civic duty meme has won out and I'm resigned to going and making the best of it. In fact, given that $40/day represents a serious bump up in my income, my answer to the question "Is there any reason you cannot serve?" becomes "No; in fact, if there's any way I can serve on three or four juries at the same time, I'd like to sign up for that!" Sure hope I don't have to pay income tax on this windfall.

Technically, of course, I shouldn't get seated on a jury. Once the prosecutor sees I listen to NPR, oppose the death penalty, graduated college, suspect televised wrestling might have a predetermined outcome, question the existence of free will, and so on, s/he would have to be nuts to accept me. But it's a human, and therefore chaotic, process, and each side has a limited number of get-out-of-seating-that-nutjob-free cards, so, as Billy Joel says, "Sooner or later it comes down to fate," and getting to sit for the whole 2-3 week trial cannot be ruled out. I might as well be the one. It's a matter of trust.

But probably what really keeps me from trying to dodge my duty is that I've been immersed in psychology for years now. The courtroom is jam-packed with psychology. That is, of course, where Dr. Phil made his bucks, leading to his meeting Oprah and his opportunity to start making megabucks. If a defendant has enough money, there's going to be a psychologist on their side studying how to sway the jury.

Once upon a time, I was an expert witness in a Microsoft trial, and psychology was key to my testimony. I was there as a magazine editor to testify merely on some peripheral point about reverse engineering. But at some point during the preparation, it came to the attention of the attorney that there was a connection between a columnist of mine who happened to work for Microsoft and the actual case. The attorney asked if he could ask about this; I said I would ask the Microsoftie; the Microsoftie said "please, please don't"; so I relayed the negative back to the attorney. No big deal, he said, not that important anyway. Well and good, except that just before I walked into the courtroom, the attorney said he wanted to walk me quickly through the questions he would ask, and for me to pretend I was on the stand and under oath. He stepped through the questions and then suddenly interjected, "Is it true that [insert the question we had previously agreed he wouldn't ask]?" I stared back at him and, without missing a beat, said "No."

He was using psychology -- the pressure of the situation, the idea that I was pretending to be under oath -- to see if he could get what we had already agreed I would not offer on the stand. I saw exactly what he was doing and would not go along. Would I have actually gone ahead and lied under oath? Just as in poker, those were cards in my hand that the attorney would have to risk something to see. Since it was, after all, a minor point, he had the good sense not to ask a question whose answer he couldn't be certain of. It's got to be more fun sitting on the jury than on the witness stand--you can snooze a little.

So if you're in U.S. District Court in Seattle during the 3 weeks starting March 23, stop by and see if I'm sitting on a jury somewhere. But please, no lighter salutes; it's just not safe with all that wood paneling.

Tuesday, February 17, 2009

They Died, Died, Died

Of all things, the influenza pandemic of 1918 has provided a number of interesting little psychological examples in various parts of my book -- everything from demonstrating that we manage "information workers" the same as ditch diggers to showing that much of the personality testing field is little better than astrology. Events of massive death are always going to produce lots of psychological effects we don't see elsewhere, I suppose.

On my local/government access cable channel, a little show periodically goes by consisting of local medical/government officials sitting in a room talking about planning for the next influenza pandemic. While most of their constituents are watching "ER" or "Desperate Housewives", these people they've never heard of are discussing how to decide who will live and who will die (due to rationing of ventilators), what military options there will be to enforce quarantines, what businesses they may have to commandeer to create space to separate the just-waiting-to-die crowd from the still-might-survive folks, how to handle a potential number of corpses that will exceed current mortuary capacity, and so on. It's surreal, but surreal like a tsunami -- you can talk about it and plan for it, but most folks won't take it seriously until it hits and it's too late.

Cheery Little Musical Memes

Hearing me talk about the Pandemic periodically over the months, my wife recently started humming a little ditty she claimed was about the Pandemic, where the lines tended to end in "and they died, died, died". I had never heard of such a thing, and couldn't believe that such a song could span the years between those with strong memories of 1918 and today. But sure enough, she finally dug it up, and here is a YouTube rendition. This tickles my curiosity, because I'm currently mining the literature of "memes", and of course I'm aware that some believe the little song named "Ring Around the Rosie" is about bubonic plague, though others argue it cannot be. Is it possible that worldwide plagues generate memes in the form of music to be passed down?

As with much thinking about memes, this can quickly lead to mushy thinking. But there may be some merit. This meme's survival advantage is easy to allege: those "infected" with the meme are more likely to remember the seriousness of the last plague, take news of a new plague more seriously, and therefore take steps to survive and be in a position to pass the meme on. It's interesting that "The Flu Pandemic Song" contains substantive information about the plague, including its virulence and modes of transmission.

In Gregory Benford's "Deep Time", he ponders the problem of leaving a message ("Stay away! We dumped our nuclear waste here!") that can span thousands of years successfully. It turns out to be a difficult task, for which we have few successful examples. The "Ring Around the Rosie" example suggests memes just can't usefully span that length of time, since we can't agree on what it means. However, the Christianity meme has made it 2,000 years, and though Jesus would surely be shocked at the difference between Evangelical American Christianity and his own teachings ("Let me get this straight -- thou thinkest I would support war and the death penalty?"), clearly some of his original memes have survived in at least a vaguely recognizable form.

Software and Memes

Of course, the reason I'm studying memes is to see whether I can say anything useful about what they have to do with software and psychology. Mostly, I see roads I don't care to go down. Yes, viruses and computer viruses exhibit some shared behavior, as Dawkins recites. Yes, we can simulate evolutionary algorithms with software, just as we can simulate most anything with software. Yes, the right software could be viewed as a "replicator" in the evolutionary sense, and maybe it will take off when the Singularity gets here and we can all download our minds into machines (though surely some unlucky souls will be assigned toaster duty!). Yes, this could all turn into scary stuff to think about (and it is presumed that Skynet will kill Steve Talbott first).

But none of that interests me. I'm an engineer, and what I'm looking for is whether the hackneyed idea of memes, and the simple evolutionary algorithm that underpins it, has something practical to tell me about creating better software.

Friday, February 13, 2009

What is Life?

That's the title of a book based on some lectures physicist Erwin Schrödinger gave back in February of 1943. Imagine this. Fermi has just gotten his atomic reactor going in Chicago a couple of months earlier, Britain has been devastated by bombing, America has entered the war but it still looks like Hitler just might end up ruling the world -- and in the midst of this chaos, Erwin Schrödinger is pondering the nature of life by thinking about how cells work.

Remember that this is well before Watson and Crick, and though funny-looking things called chromosomes had been located inside cells, nobody (let alone a physicist) really knew where the genetics were, where the cell was hiding those traits that could almost magically be passed down from one generation to another. It was a perfect opportunity for Schrödinger to make a bunch of prognostications that would soon prove foolish. But he really didn't -- he was amazingly prescient in his analysis of the nature of life.

Hands Off My Gene Splicer!

Physicists are almost irresistibly attracted to biology. Leo Szilárd, co-patenter of the nuclear reactor and the guy who got the Manhattan Project going by warning Roosevelt about nuclear fission, eventually switched to biology, and designed the radiation treatment he used to successfully treat his own bladder cancer. Richard Feynman dabbled briefly in biology just for fun. Roger Penrose tries to find quantum spookiness in the brain that will keep us from just being meat machines.

I think physicists are attracted to biology because there is a great race going on, much like the building of the Transcontinental Railroad. Physicists are exploring reality from the bottom (particle physics) up, while biologists are driving from the top (living tissue) down. Sooner or later, they are going to meet somewhere in the middle, and physicists really would rather not see biologists be the ones to drive that golden spike in to nail down our understanding of how life works.

Like the Second Coming of Christ, it's hard to prove this momentous event might not be just around the corner, so periodically physicists will make a little run at the problem just to keep their team in the game. And the 1940s were heady times for physicists -- they were bustin' atoms apart for the first time in human history! So it's entirely understandable that Schrödinger, sensing we were close to big progress in understanding cells, would want to take a hard look to see what physics could say about the situation. (His little essay would end up influencing both Watson and Crick in their search for the genetic code less than a decade later.)

Enter Psychology

How did writing a book on psychology and programming lead me to reading 60-year-old physics lectures? It's because my book starts by trying to understand the fundamental nature of what computer programming is, and how it fits into human history. Martin Seligman, founder of the Positive Psychology movement, wrote a comment that piqued my interest about a book by Robert Wright called "Nonzero: The Logic of Human Destiny". In that book, as he considers human history from the perspective of energy usage, Wright refers back to Schrödinger's essay. And since I have the luxury of writing a book with no deadline, I cannot resist hopping the bus to the University of Washington to read Schrödinger's own words. What do I find there? Descartes once more.

I was just writing about Descartes and his pesky question: is the mind something separate from the body? After pondering the nature of cellular life, Schrödinger eventually cannot avoid making his own pronouncement on Descartes' question. To understand why anyone would care, you have to remember who this particular physicist was.

Schrödinger almost single-handedly put the "spooky" into quantum physics. With one relatively simple equation, he both explained observed results and, like Descartes, raised philosophical questions that remain unanswered today. What Schrödinger offered was an equation that explained the available data stunningly well, but did so by describing matter as a wave. What does it mean that matter can be described by a "wave"? That is still being argued today, but it certainly means that spooky stuff we can't really grasp happens when you get down to the tiny world of sub-atomic particles. One extreme extrapolation of Schrödinger's useful, accurate, but spooky wave equation is the idea that every little thing that can happen, does -- and causes yet another split into an infinite number of parallel universes. Some grown men believe this could be true. Honest.
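For the curious, here is that equation in its modern time-dependent form for a single particle (my rendering in today's textbook notation, not the notation of his lectures):

    i\hbar \frac{\partial \Psi(x,t)}{\partial t} = \left[ -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x) \right] \Psi(x,t)

The right-hand side is ordinary bookkeeping about kinetic and potential energy; the spooky part is that the thing being solved for, \Psi, describes a particle of matter as a wave.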

For the holdouts who still hope that the brain isn't just a meat machine, that there is something special about "consciousness" (as though anyone agrees on what that word actually means!) that will make it impossible to create machines that are "alive", quantum spookiness is one of their last, best hopes. Schrödinger's physics offers a spookiness so rich and full of bizarre possibilities that it's hard to absolutely rule out (though most physicists think it bunkum) the possibility that "consciousness" (whatever that is!) is some special phenomenon woven into the very nature of reality, and therefore not something we will be able to recreate by simply reverse-engineering the neurons of the brain. What would Schrödinger have said about this? Fortunately, we don't have to wonder, because he pondered the question more than 60 years ago. Here's exactly what he said:


According to the evidence put forward in the preceding pages the space-time events in the body of a living being which correspond to the activity of its mind, to its self-conscious or any other actions, are (considering also their complex structure and the accepted statistical explanation of physico-chemistry) if not strictly deterministic at any rate statistico-deterministic. To the physicist I wish to emphasize that in my opinion, and contrary to the opinion upheld in some quarters, quantum indeterminacy plays no biologically relevant role in them, except perhaps by enhancing their purely accidental character in such events as meiosis, natural and X-ray-induced mutation and so on -- and this is in any case obvious and well recognized.


So the very father of quantum spookiness got his vote in early: there is no quantum spookiness involved in consciousness, we are just deterministic machines and we would admit it, in his words, "if there were not the well-known, unpleasant feeling about 'declaring oneself to be a pure mechanism'."

And yet, if you read the epilogue of "What is Life?", which is titled "On Determinism and Free Will", you'll see that, just as his famous equation encompasses the contradiction of matter being both a particle and a wave, Schrödinger's personal philosophy embraced the contradiction of being a pure mechanism but still having the powerful feeling of personal free will. Those who accuse Schrödinger of turning to mysticism are, I think, correct. But we all have to do something with the big, blank page labelled Currently Unknowable, and it's not clear to me that carrying it in a bag marked Mysticism is any worse than carrying it anywhere else.

The Hook

Little to none of this discussion is in my book. The real reason Robert Wright refers to Schrödinger's essay is his observation that the nature of life is to create a temporary island of decreasing entropy, though the 2nd law of thermodynamics is preserved because life emits a waste stream of increased entropy. Therein lies a key to understanding the fundamental nature of computer programming. But you'll have to wait for the book to read about that.
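(I'll save the punch line for the book, but the second-law bookkeeping behind Wright's observation is standard textbook fare, sketched here in my notation, not his:)

    \Delta S_{\text{life}} < 0 \quad \text{is allowed, so long as} \quad \Delta S_{\text{life}} + \Delta S_{\text{surroundings}} \geq 0

That is, an organism can keep its own entropy falling only by exporting at least as much entropy into its surroundings -- the waste stream mentioned above.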

Wednesday, February 11, 2009

Cartesian Programming

I long ago decided that my (not yet finished) book ("The Pop Psychology of Programming", if you're paying attention) had to include a brief history of psychology. The reason is, people have lots of stereotypes and misconceptions about psychology, but which ones you hold depends a bit upon when you learnt anything about psychology. If most of your psych-ed came from watching '60s TV, then you're still imagining a couch-bound "talking cure". If you took Psych 101 in the '70s, then you might imagine the field is stuck back in behaviorism. So, a Brief History of Shrinks seems like a plausible way to help get disparate readers more or less on the same page.

But what I only recently decided was where to start my History of Psychology. People who write a History of Anything seem to vie with each other to start off at the earliest historical date possible. When it comes to psychology, I believe the winner is the guy who claims there was a pharaoh in ancient Egypt doing psych experiments. Well, I sure ain't gonna start back that far. For a long time, my draft of that chapter started with Freud (having found a cartoon that nicely captures the deconstruction of Freud), but now I've found the truly best starting point: Descartes.

Why Descartes?

Descartes is the "I think, therefore I am" dude, and also the man Cartesian coordinates are named after -- one of those inventions so taken for granted that it's hard to envision the tedious chaos that preceded it (like automobile cupholders). He does not have that much to do with modern nuts-and-bolts psychology, except for framing a key question that still absorbs a great many great minds today: is the mind something separate from the body?

In that simple-sounding summation is an enormous amount of baggage that can still get even mild-mannered (pot-smoking) philosophy majors red-faced and frothing. Tied up in there are questions of free will and (highly relevant for a synthesis of programming and psychology) whether or not machines can truly "think".

If you're not careful in your reading list, you might skate through a study of psychology and think Descartes' question is no question at all -- lots of Smart Folk exhibit only humorous indulgence towards those who still hope to find a Ghost in the Machine (a phrase invented specifically to make fun of Descartes' conclusion that mind was separate from brain and body). But even though reductionism has chipped away at the spaces where there might be any room for an ethereal mind/spirit to still be hiding, the folks rooting for the Ghost are in deadly earnest, and not lacking in brain power themselves. Those pinning their hopes on quantum spookiness have no lesser light than physicist Roger Penrose on their side, even though some of them are wildly extrapolating his nubs of true science into flights of fancy. If they are dwindling in number and persuasiveness, well, Kurzweil's "singularity" of machine sentience continues to be in no great hurry to appear and prove them wrong.

Descartes As Programmer

Having finally settled on Descartes as my start, I am pleased to recognize him as having a true, stereotypical programmer personality. A prickly fellow, not inclined to suffer the mental deficiencies of others in silence, I think he almost certainly would have been a programmer today (although, Wolfram-like, he most likely would have insisted on inventing his own computer language rather than deigning to use one invented by his lessers).

But most of my newfound fondness for Descartes comes from the realization that he just wanted to solve everything himself, and not have to pay attention to other folks' solutions. "I think, therefore I am" was his insistence on building on absolutely nothing gotten from anyone else. How can one look at Richard Stallman insisting on reinventing Unix from scratch, or Steve Gibson insisting on writing massive applications entirely in assembly language, and not see the spirit (others might choose another word) of Descartes?

As I will write about extensively in my book, the programming industry holds some deeply mistaken views about talent and the brain, and those mistakes push it to hire and encourage just the sort of folks who are not so interested in learning from the works of others. All this mishmash comes together in a golden opportunity to coin a new term: Cartesian Programming.

Cartesian Programming

"Cartesian Programming" (so sez I) is the practice of coding a solution to a problem without making the slightest effort to examine any prior work done on that problem by others. One might phrase it as: "I code, therefore I am (not interested in reading your code)".

There is a fly in my ointment. It turns out that someone else has already coined the term "Cartesian Programming" to refer to some academe-doomed programming language construct whose practical value could not fill a teaspoon, even if you spit into it to help. (Perhaps a sense of phrase-ownership has made me harsh!) But these things are best settled by gentlemanly edit-war on Wikipedia, where opinion goes to be (nearly) made fact. I trust I can generate enough enthusiasts for my definition to mold Wiki-reality my way.

Tuesday, February 10, 2009

Names Can Kill

In my book, I point out that one problem with writing clear code is variable name rot (inspired by Ward Cunningham's discussion of variable names in his 2004 OOPSLA talk). You named a variable something like AccountBalance, but then later had to change the code so that the contents of the variable are rounded to the nearest dollar. Someone else then, reading your code as they modify it, fails to realize that AccountBalance really should have been named RoundedAccountBalance, so they perform penny-wise calculations that they should not.
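A minimal sketch of the rot in action (hypothetical names, and Python purely for illustration):

    # Version 1: the name was accurate -- account_balance held exact
    # dollars and cents.
    # Version 2 (below): a later change rounds to whole dollars, but
    # nobody renamed anything, so the name now lies.

    def compute_account_balance(transactions):
        account_balance = sum(transactions)
        return round(account_balance)  # silently returns a rounded value

    def monthly_interest(account_balance, rate=0.01):
        # A later maintainer trusts the name and does penny-wise math,
        # unaware the pennies were already discarded upstream.
        return account_balance * rate

    balance = compute_account_balance([100.25, 23.20])
    print(monthly_interest(balance))  # 1.23, not the 1.2345 the name implies

The bug isn't in any one line; it's in the gap between what the name promises and what the value actually holds.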

When is "Carcinoma" Not Cancer?

As I point out in the book in a footnote, the problem of naming things correctly is not isolated to computer programming. In particular, medicine has naming mistakes that cause enormous problems.

For example, every year, women are diagnosed with Ductal Carcinoma In Situ, or DCIS. Well, we all know that "carcinoma" means "cancer", so it's no wonder that many of these women elect to have their breasts amputated even though the treatment of lumpectomy followed by radiation is just as effective in most cases. There's just one little problem with that: DCIS is not cancer.

People often imagine that cancer is some foreign invader that has horns and a tail when you look at it under a microscope. It's not. Cancer is something going wrong with your own cells, and that "wrong" happens in many stages, so deciding whether or not you have cancer is a judgment call made by a guy you'll never meet who spent a little time (not much -- he has lots of others to process) staring at some of your cells under a microscope.

What DCIS technically is, is a "pre-cancer". The guy with the microscope looked at your cells and said, "Well, that's not cancer, but it's definitely funny looking and I think it's real likely to turn into cancer." You certainly should either get it treated or monitor very frequently for the appearance of cancer.

But the medical community does a lousy job of grappling with this simple question: How many women would avoid breast amputation and elect for lumpectomy followed by radiation instead if DCIS were presented to them as "not cancerous yet" instead of as "Stage Zero breast cancer"? Many doctors do at least take the time to feel a little bad about the astounding percentage of women who elect amputation even though they don't technically have cancer, but the only real result is periodic handwringing in a newspaper article somewhere. Don't get me wrong: electing mastectomy for DCIS can be a rational choice; it's just that the numbers strongly imply it often is an irrational choice, and calling a non-cancer "carcinoma" surely contributes to that irrationality.

The Misnamed "Tumor Cell"

DCIS is a more or less accidental bad naming choice as far as I can tell, but there are other bad naming choices that I suspect arise from a profit motive. One of those is in the news today, and it's called a circulating tumor cell.

Doctors have known for years that people with cancer are real likely to have cancerous cells floating by in the bloodstream. But only recently have people been creating technology that lets someone quantify how many cancer cells you have in your bloodstream in a repeatable way. Remember, deciding whether or not cells are cancerous is a judgment call. A little trick is required to achieve a highly repeatable count of circulating tumor cells.

Here's the trick: there's a particular kind of cell, the epithelial cell, that really shouldn't be floating around in the bloodstream. But if you have a tumor, there's a good chance it will shed these epithelial cells. Aha! Medical science has been getting really good at tagging particular types of cells, so in recent years we've seen the introduction of lab tests that semi-automatically locate and mark those epithelial cells that shouldn't be floating around if you don't have a tumor. Now, a guy still has to look through a microscope, but it's pretty much reduced to a job of counting how many green dots he sees, since the cells of interest have been chemically marked for him.

But wait, it gets even better. An American version of this technology, named CellSearch, managed to get FDA approval, because they showed that (in a very particular situation), if your circulating tumor cell (CTC) count didn't go down shortly after you started chemotherapy, you weren't likely to live as long as the folks whose CTC counts did go down after starting chemotherapy. Cool beans, because this implied a doctor might choose a chemo drug, see that the CTC was not going down, and get at least one more shot at quickly switching to another chemo drug in the hopes it might be more effective.

What's the Naming Problem?

Where does the naming problem come in? Well, as you can see, I've lapsed into saying the phrase "circulating tumor cell" (or "CTC"), which is exactly what the folks who make the CellSearch test want. But is that name really accurate? Remember, this test works by assuming that an epithelial cell wouldn't be floating around out there unless it came from a tumor. So, the more accurate name (which the CellSearch folks fastidiously avoid) would be "circulating epithelial cell test", because no human actually goes in there and confirms that each of those epithelial cells really is cancerous.

But if that assumption (which is swept under the carpet by using the CTC name) were faulty, how did CellSearch get approved by the FDA? Well, they did a quite impressive study, that's how. They tested a bunch (about 400) of women who came in for a breast biopsy, followed them to see which ones actually turned out to have cancer, and found that almost nobody who didn't have cancer had a non-zero CTC count. Cool beans, because a sample of 400 is about what you need to prove statistical significance... if your sample is representative of the general population. Is it possible that women who are getting a breast biopsy are just not quite representative of the general population of women? I claim it is.
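And even a clean study of that size only buys you so much. There's a handy back-of-the-envelope result, the statistical "rule of three", worth sketching here (the 400 is the figure I recall, and treating them all as cancer-free controls is my simplifying assumption, not the study's exact design):

    # If you observe zero false positives among n benign controls, the
    # 95% upper confidence bound on the true false-positive rate is
    # still roughly 3/n -- the statistical "rule of three".
    def rule_of_three_upper_bound(n_controls):
        return 3.0 / n_controls

    print(rule_of_three_upper_bound(400))  # 0.0075, i.e. about 0.75%

So a spotless 400-woman study still can't rule out a false-positive rate approaching 1 in 130 -- and that's before asking whether those 400 women resemble the patients the test actually gets used on.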

See, we used the CellSearch test on my wife after her breast cancer treatment. Comfortingly, her CTC eventually descended to 0 or 1. Hooray! Until one year, the test came back 12 (very roughly speaking, a "bad" CTC count is 5 or bigger). We were using the test well outside its FDA-mandated application at that point (remember, it was only approved for use in guessing whether your first chemo choice was working), but there was no way around the fact that it looked like bad news. Hoping for the best, I wondered: was there any way those 12 circulating "tumor" cells were actually just circulating epithelial cells that weren't cancerous at all? In fact, even though the CellSearch folks had a great study that said the answer was "no", the answer was "yes".

As I re-educated myself on the whole CTC literature, I discovered that the Europeans have a competing technology, called MAINTRAC. But they had done a fascinating study that the CellSearch folks seem to have no interest in replicating. They wanted to look at the (scary, sorry) idea that cancer surgery itself can send tumor cells out into the blood, possibly raising the odds (but probably not hugely) of the patient getting a deadly metastasis. They tracked the CTC count in the hours after breast cancer surgery and, not to their surprise, found that the counts went up after surgery. But the part of the study that interested me was the control: they did the same CTC tracking on a benign case, a patient who got surgery but didn't have breast cancer. The CTC number peaked at over 50,000 in that patient.

And now I'm back at the naming problem, because if your "circulating tumor cell" test reports a number of 50,000 in a patient who has no tumor, then maybe that's not quite the right name for it. Solution? If you read that paper, you'll see that they are careful to say "circulating epithelial cell" and not "circulating tumor cell". Even though they have a test just like CellSearch, the Europeans don't call it a circulating tumor cell test, because they know that name can be misleading. They know that what the test actually counts is epithelial cells, and they know of at least one situation where those epithelial cells don't point to cancer at all.

But back to the cliffhanger! My wife gets the scary CTC of 12. Did she have a deep cut that could mimic the epithelial cell mobilization seen in surgery? No, but then it dawned on me. She exercises. Hard. In fact, she runs a lot of half-marathons, and those come with blisters and who-knows-what kinds of internal damage. So I proposed a little test: no exercise for 2 weeks straight, followed by another CellSearch "CTC" test. The result? Tada! A count of zero. Now what are the odds that you can pick 400 women who just got a breast biopsy and none of them had run a half-marathon within the previous week? I'm guessing the odds are better than you think, because a) women getting a breast biopsy tend to be older and b) they had time to get real worried about having cancer and cancel any big events (like a half-marathon) between when the doctor found something suspicious and when the biopsy actually got scheduled and performed.

So my hypothesis, backed up by one data point, is that, in contrast to the underlying assertion of the name "circulating tumor cell", it is easy to generate false positives with the CellSearch test.

Your Turn, Men

CellSearch is in the news today because it was used in a study of prostate cancer patients. I was dismayed to see that all the news reporting I could find faithfully repeated the name "circulating tumor cell", and absolutely, completely reinforced the idea that this test has no false positives -- even though the Europeans have proven that false. Why does it matter? If the test gets widespread use, some cancer patient is going to do something to get a false positive (exercise too hard? cut themselves shaving? who knows?). And their doctor is going to make a potentially devastating medical decision based on that false positive, a decision they might be much slower to make if the name of the test were not so inaccurate.

Names can kill.

Friday, February 06, 2009

Back Pain and Programming

Old age has made me tend to see the big picture in everything, a world where everything's related to everything else. Today's headline is the failure of imaging (using X-rays, CT, and/or MRI) to improve outcomes in patients with back pain. Which, to me, has much in common with programming methodologies. Have patience; I can connect them.

Back pain has been a snake pit of medicine for years now because a) back pain often gets better even if you do nothing and b) symptoms are highly subjective, so most any treatment can be shown to possibly help a little and c) people will pay big bucks to get rid of the suffering of back pain. It hurts! So, all those factors add up to a field where medical professionals are making big bucks and, truth be told, they are in no hurry at all to conduct a rigorous study of their favorite surgery because it would be a kick in the reimbursement pocketbook if the study showed results no better than placebo.

Now here's a little study about back pain that your local back pain surgeon will never tell you about. They tried to figure out what factors predict who will benefit from back pain surgery. After all, there's no point in cutting everybody open if you can predict in advance which patients are unlikely to feel any better afterwards (besides, they might sue if they feel no better). There were a number of candidate factors they looked at, including the obvious medical stuff like exactly which vertebrae looked bad on a scan.

Here comes the study punch line, as it always must. The #1 predictor of whether or not you will benefit from surgery for back pain has nothing to do with your back: it's your psycho-social state. In other words, if you're going through a divorce, your boss hates you, you have few friends or opportunities to enjoy life, etc., then the odds that surgery can fix your back pain are not good.

Now you can try to spin that result in different ways, but just suppose the simplest and most obvious interpretation is true: the root of much back pain is in your mental and social condition. That does not mean your back pain is "all in your head", but rather that back pain is just one of those bad things that can result when you walk around all day clenching your fists, pumping stress hormones, and generally feeling pretty bad about life.

Now to connect the dots. For many businesses, software development is a pain -- in the butt, if not the lower back. However, there are many folks out there waiting to sell your business some solution (new tool, new web service, new methodology, etc.) that promises to reduce your programming pain. Suppose for a moment that, as with lower back pain, a major source of software development pain comes from psycho-social conditions. Maybe the company uses zero-sum bonuses so that if you help your co-worker with her programming problem, you may just be taking bonus money out of your pocket and giving it to her. Maybe the company has an incompetent, top-down review system, so you experience the stress of knowing that you may get a great review for a quarter when you were goofing off, and your worst review when you were working really hard and doing your best work. There are innumerable psycho-social conditions that can help make software development unlikely to go well.

Just like bad back surgeons, that consultant/salesman is happy to sell you something that maybe, kinda, works sometimes for some folks, but in no hurry at all to chip in some funds for a rigorous study that might show their solution is no better than placebo. And make no mistake, there is a placebo effect in business experiments: it's called the Hawthorne effect.

But even though, just like a bad back surgeon, that salesman/consultant can point to some stunning successes, maybe the odds that the proposed cure will help you have little to do with the cure itself, and more to do with what kind of psycho-social shape your organization is in to start with.

But just as with back pain, understanding this may not help. After all, it's much, much easier to buy a new software tool or hire a consultant than it is to make a serious structural change like moving to non-zero-sum incentives, or a review process that identifies poor managers rather than just poor performers. As I will argue in my book, the fact that few programmers or programming organizations can make serious psycho-social changes effectively represents a competitive advantage for those who can. Those few who grasp and adapt to the psycho-social forces that make programming harder than it needs to be end up with a business advantage every bit as solid as a patent or trade secret. They won't have to defend it in court; they can simply rely on the general difficulty humans have with change.

Thursday, February 05, 2009

The Halo Effect

The book on top of my desk recently is The Halo Effect, by Philip Rosenzweig. After you've been doing heavy research for some years on a book, you start to see that all the threads are cross-connected in ever more complex ways. So, Nassim Taleb provides a cover blurb for The Halo Effect, and I already lean on Taleb's The Black Swan in my introduction (a grand sweeping attempt to re-view programming in the context of psychology, philosophy, physics(!), and human history), and also used Taleb's Fooled by Randomness in the Psychology of Incompetence chapter.

What does The Halo Effect have to do with the psychology of programming? Actually, the name refers to a psychology study by Thorndike back in 1920, in which he found that superior officers tended to rate subordinates as either good at everything or lousy at everything. No nuances in between, no people who had both significant strengths and significant weaknesses.

One thing psychology is not so good at is re-integrating its own findings over time. Psychology researchers go off building up their own particular ideas, and they have little incentive to note how their idea (invariably with its own cute coined terminology) overlaps with others, or with much older research ideas. Starbuck (see below) has insights on this problem for both psychology and social science research. To me, the Halo Effect is pretty much the Fundamental Attribution Error examined in a context of group dynamics, and that is how I will use it.

The first nitty-gritty psychology chapter in my book is Attribution Errors, because I think the Fundamental Attribution Error is really the simplest and most influential psychology concept you can learn, especially as applied to others (most of the book is focused on the individual mind, so it's good to get something group-related out at the beginning). Rosenzweig's book gives me a slightly different slant on the FAE and, being at least modestly academic, provides some relevant references into the psych research literature, which I always appreciate for my next research trip to the University of Washington.

Rosenzweig relentlessly dismantles the validity of Jim Collins' uber-popular business books, based on fundamental flaws in his research model. This makes me look to see if Rosenzweig is connected to another thread: William Starbuck. But no, "Starbuck" is not in the index (though "Starbucks" is). Did Rosenzweig really not read Starbuck's astounding The Production of Knowledge, in which he dismantles, not just the research methodology of some popular business books, but the entire field of social science research? OK, well Starbuck was recent, 2006. But Jerome Kagan is not in the index either, and he was pointing to problems with research that relies on questionnaires at least as far back as 1989, in his book "Unstable Ideas". Kagan never lets himself forget that he long ago mistakenly believed (and taught) that autism was caused by the behavior of mothers; he uses the memory of that mistake to maintain a high degree of skepticism about the limits of research.

This is the curse of modern academic life. The sheer volume of ideas produced and published each year guarantees that you will overlook some useful ideas that are highly relevant to your own. All you can do is push your ideas out there, and hope the connections you missed don't completely invalidate your work, and that they will be spotted and put together by someone else.

The flip side of this curse is that it makes possible the modern Renaissance man (or woman) of science. When Richard Feynman made his brief foray into biology, he quipped that he was able to quickly make a contribution because he hadn't wasted all that time learning the names of every little body part, like a "real" biologist has to do. In this world of information overload, the amateur who is willing to plunge into hours of reading just looking for connections can sometimes make a contribution in a field that nominally requires deep expertise.

Thus, just yesterday I find myself, sans medical degree, writing to the author of a medical study appearing in the headlines this week to point out what the experts in his field have overlooked. The headlines were about the discovery that kidney failure patients on dialysis who live at high altitudes do better than those at low altitudes. The renal specialists, of course, imagine that this must be somehow connected to the hypoxia of altitude stimulating more red blood cell production. What I know that they don't is that a) it takes more altitude than they imagine to stimulate red blood cell production (that literature lies in sports medicine, which nephrologists do not read) and b) there is a recently discovered, amazing effect: oxygen breathing can stimulate EPO, the natural hormone that tells the bone marrow to make more red blood cells.

The trick for me is knowing that nurses will put a dialysis patient on oxygen if their oximeter indicates blood oxygen levels are low. Thus, the most likely way that altitude influences dialysis patient outcomes is by virtue of the fact that these patients are getting more supplemental oxygen, and their caregivers are unaware that this can stimulate red blood cell production, just like giving them a shot of Procrit.

Of course, as my mother-in-law likes to exclaim, "But you don't get paid for any of that!" Which is true, and makes me realize it's time to get back to writing my damn book.