Friday, April 09, 2010

Why Johnny Journalist Can't Spell

Maybe it's just me (it's not), but it seems like spelling words correctly is no longer viewed as a requirement for writing. Of course, with average citizens publishing their every thought and deed, nobody could be surprised that incorrect spelling and grammar would be the least of the problems with the daily content spew. This is just a blog, for example. I have no copyeditor to check my work before it goes online, only a couple of re-reads by yours truly. But shouldn't we still be able to muster just a small bit of concern that major news organizations can no longer spell?

Of course, The Daily Show makes regular sport of the absurdities that appear in "the crawl" beneath the never-ending news channels. But that's like shooting fish in a barrel. As yet another comedy show (30 Rock) correctly pointed out: "It's a 24-hour news channel; we don't have time to do it right anymore."

But what the hell happened to print journalists? I'm no longer surprised to see any sort of boner even in the online presence of The New York Times. But why? Did they fire all the copyeditors when they started putting their copy on the Web? Is it an insidious attempt to fit in with the thumb-banging generation that u no is ROFL? It's a puzzle.

It's a puzzle, but today I got a hint from The Atlantic. During their stalwart coverage of important issues, I came across a new (non-)word: maritial. This is a big clue.

Of course, confusing "marital" and "martial" is an ancient source of humor. But this Spooneristic spelling is something a little different: not just the substitution of one word for another, but the invention of a new word. Do you see the clue? This word would never appear in any dictionary, whether paper or electronic. The conclusion is inescapable: The Atlantic does not even run its online copy through a spelling checker before publishing!
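
To see just how low this bar is, here's a toy check -- my own sketch, assuming nothing more than the stock Unix word list at /usr/share/dict/words -- that any publishing pipeline could have run:

    # A quick sanity check (mine, not The Atlantic's workflow): any spelling
    # checker backed by a word list will flag "maritial" instantly.
    # Assumes the standard Unix word list at /usr/share/dict/words.
    words = {line.strip().lower() for line in open("/usr/share/dict/words")}

    for candidate in ("marital", "martial", "maritial"):
        verdict = "in the dictionary" if candidate in words else "NOT A WORD"
        print(f"{candidate:10s} {verdict}")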

I'm from the Government, er, a Piece of Software, and I'm Here to Help

OK, so dumb old news media can't even punch a button to get their copy spell-checked. So what? Here comes the blog-worthy twist: I blame us. We the programmers who automate tasks with our software, who put human copyeditors out of business -- this is all our fault.

I write in the book about how automated tools can make us less competent. There's a psych study showing that experts who are given grammar- and spell-checking tools in their word processor begin to lose expertise. But what's happened here is one meta-level removed from that. I give you Burk's Law of Automation:

To Automate a Task Is to Devalue It

Consider the day of the copyeditor. Mistakes could still be made, but when they were, they besmirched someone's reputation. And to pay a copyeditor and then utterly fail to have them review the copy... well, that might result in some high-level meetings and reprimands.

Now consider the day of the automated copyedit, the spelling- and grammar-checking software delivered by us programmers. Failing to review the copy is now just a failure to press a button. Anybody could forget to press a button. It's not like there's a separate employee sitting there whose sole job is to press the button.

We the programmers are the real source of the decline of journalistic standards. We automated spelling and grammar checks (in that shoddy, works-good-enough-to-sell way we automate things), and psychology did the rest. If the computer can do it (never mind that it can't do it that well), then it must not be that important. When you print the non-word maritial in your magazine, it's not a failure of a trained professional to do their job; it's a failure to push a button, something a monkey could do. The resulting flaw is exactly the same, but the use of software makes the flaw seem less important in our minds.

This problem is intertwined with the problem of surrendering authority to machines, of giving them undeserved agency. The computer becomes, not just an additional tool for checking grammar and spelling, but responsible for checking grammar and spelling. Computers can do many things, but they cannot be responsible. They cannot feel shame, be punished, be found legally liable, be rewarded, or take pride in their work.

Transparent Limitations

The problem here is not as specific as a few typos in print. It is a general and growing problem that people increasingly surrender authority to software while collectively growing ever more ignorant of software's limitations. It behooves programmers to do something they've invested little effort in up to now: making the limitations of software transparent.

Is it our fault that people don't push the button to perform grammar and spelling checks before publishing? Actually, it is. If you know that people will reliably fail to perform a check, merely claiming that they "should" behave in ways that psychology guarantees they won't is to be complicit in the problem, one step away from the cities that tweak yellow-light durations to raise more money from traffic tickets. As programmers, we all too often offload responsibility onto the future, distant, removed user. Since that user will likewise end up offloading responsibility onto our software, small wonder that we engender situations where no one believes themselves responsible. That's a small thing when the result is a misspelled word; it's not so small when the result is a misdiagnosed X-ray.

But reminding the user to push a button is not the real meat of the issue (though it is entirely neglected: does your email client display in red the number of typos when you go to press the "Send" button?). The real issue is transparency of limitations. Does your grammar checker give you an estimate of the number of grammar errors it may have overlooked, based on the size and complexity of the text? Has any programmer ever even considered tackling that problem? Likewise, automated software that helps doctors read X-rays needs to continually remind its users of its own false-positive and false-negative statistics. And, of course, a voting-machine system that offers a "recount" button that merely reprints the same number from memory is so opaque about its limitations as to be fraudulent.
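
To make that less abstract, here is a toy sketch of what such transparency could look like -- entirely my own invention, with a made-up catch rate -- in which a checker that knows how often it misses things reports an estimate of what probably slipped past it:

    # Toy sketch of a checker that is transparent about its limitations.
    # The 0.85 "recall" (the fraction of real errors the checker catches) is an
    # invented figure; a real tool would have to measure and publish its own.

    def estimate_missed(errors_found, recall=0.85):
        """Rough estimate of how many errors were overlooked, given how many were found."""
        estimated_total = errors_found / recall   # expected true number of errors
        return estimated_total - errors_found     # the remainder the checker likely missed

    found = 12
    print(f"Found {found} problems; roughly {estimate_missed(found):.0f} more probably got past me.")

Even a crude number like that changes the user's mental model from "the machine handled it" to "the machine caught most of it, and the rest is still my job."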

The Future Ain't Bright

Alas, being transparent about your limitations conflicts with the goal of selling software. The number of known bugs (heaven forbid we should attempt to estimate the number of unknown bugs!) in our software is generally treated as a secret or, in open source software, as another means of avoiding responsibility ("you have the source -- you could fix those bugs yourself!"). Who will buy the word-processing system that estimates the number of flaws it may have missed when the competition simply says nothing and hopes you'll infer it is flawless? As far as I can see, it is in everyone's short-term interest to use software as a general tool for avoiding responsibility. Short of declaring maritial law, I can't see this ever changing.