Do Androids Dream of Misplaced Modifiers?

NPR reports that automated grading systems might be better than human teachers at identifying syntax, grammar, and punctuation errors in essays. Oh, and the computer is much faster, able to grade 16,000 essays in 20 seconds.

But here’s the rub. Surprise, surprise, the computer is generally fricking clueless about anything that really matters in academic life:

What the automated readers aren’t good at, he says, is comprehension and whether a sentence is factually true or not. They also have a hard time with other forms of writing, like poetry.

Yes. Brilliant. If not for the small matter of comprehension, these computers would be outperforming us. Also, don’t forget about the facts. And poetry. And creativity. And critical thinking.

But, don’t worry, Hal is good at everything else, like spell check. And assembling bird houses.

It’s easy to forget that “artificial intelligence” does not (indeed, cannot) mean replicated human intelligence. No, it’s something altogether different. It’s algorithmic. It’s partial, molecular, piecemeal. It can simulate the thousands of individual movements that create a flock of birds without ever seeing the flock, much less appreciating it or pondering what its organization out of chaos suggests about existence.
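That flocking example is a real one: Craig Reynolds’s 1987 “boids” simulation produces flock behavior from three purely local rules (separation, alignment, cohesion), with no agent ever representing “the flock” at all. Here’s a minimal sketch of the idea in Python; the function name, weights, and neighbor radius are my own illustration, not Reynolds’s actual code:

```python
import math

def flock_step(boids, dt=1.0, radius=5.0, w_sep=0.05, w_ali=0.05, w_coh=0.005):
    """One update of the classic boids rules.

    boids: list of (x, y, vx, vy) tuples. Each agent looks only at
    neighbors within `radius` and nudges its velocity three ways:
    away from crowding, toward the neighbors' average heading, and
    toward the neighbors' center. The flock emerges; no code sees it.
    """
    updated = []
    for i, (x, y, vx, vy) in enumerate(boids):
        sep_x = sep_y = ali_x = ali_y = coh_x = coh_y = 0.0
        n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(boids):
            if i == j:
                continue
            if math.hypot(ox - x, oy - y) < radius:
                n += 1
                sep_x += x - ox; sep_y += y - oy   # separation: back away
                ali_x += ovx;    ali_y += ovy      # alignment: match heading
                coh_x += ox;     coh_y += oy       # cohesion: drift to center
        if n:
            vx += w_sep * sep_x + w_ali * (ali_x / n - vx) + w_coh * (coh_x / n - x)
            vy += w_sep * sep_y + w_ali * (ali_y / n - vy) + w_coh * (coh_y / n - y)
        updated.append((x + vx * dt, y + vy * dt, vx, vy))
    return updated
```

Run it in a loop and the agents fall into formation, which is exactly the point: the intelligence is in the aggregate behavior, not in any individual rule.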

This recent Wired article by Steven Levy describes the shift that occurred in AI studies after experts realized the human brain is too complicated to be imitated. Researchers settled, instead, for something less than human:

Today’s AI bears little resemblance to its initial conception. The field’s trailblazers in the 1950s and ’60s believed success lay in mimicking the logic-based reasoning that human brains were thought to use. In 1957, the AI crowd confidently predicted that machines would soon be able to replicate all kinds of human mental achievements. But that turned out to be wildly unachievable, in part because we still don’t really understand how the brain works, much less how to re-create it.

So during the ’80s, graduate students began to focus on the kinds of skills for which computers were well-suited and found they could build something like intelligence from groups of systems that operated according to their own kind of reasoning. “The big surprise is that intelligence isn’t a unitary thing,” says Danny Hillis, who cofounded Thinking Machines, a company that made massively parallel supercomputers. “What we’ve learned is that it’s all kinds of different behaviors.”

True, I would say to Danny Hillis, but you’re missing a big point: there are degrees of complexity, subtlety, and depth in the different behaviors of thinking. For argument’s sake, we’ll say there are three levels, using Gregory Bateson’s typology of learning: Level 1, rote learning; Level 2, constructing meaning; and Level 3, transcending meaning. Maybe I’ll talk about these in detail some other time.

For now, though, it seems important to point out that the previously mentioned grading software is only capable of checking for Level 1 learning. Actually, only certain aspects of Level 1, since it fails as a fact-checker. This kind of software would be useful to students as a tool similar to spell check, but it’s good for little else.

Here is my takeaway: computers and artificial intelligence still seem light-years away from matching the sophistication of the human brain, its millions of years of evolution, and its 100,000 years of human culture. If robots want to go off and create their own literature that speaks to them (if such communication were possible), then we should respect them. Otherwise, they have some work to do.

Finally, if you’re the kind of teacher who only focuses on grammar, spelling, syntax, and punctuation, you’d better start looking for another job, because a server in India will be putting you out of work shortly.



  1. Mary Pringle

     April 26, 2012

    I agree that grammar, spelling, and punctuation aren’t what writing is about (though copy editing is a necessary step), but there is a big difference between a computer that can identify errors and a computer that can teach students not to make them. That would be a much smarter machine. My composition students can barely use a spell checker. The Word grammar checker stymies most of them. If a machine would teach my students what to do about that little green line, I would be most grateful. But where is it?

    • Andrew Neuendorf

       April 27, 2012

      I’m just thinking in terms of a cost/benefit analysis. If a human worker can be replaced by a machine (or outsourced to someone with a lower wage), then it will happen, so long as technocrats and money managers are running public education. I don’t think pedagogy or research will slow their thinking much. If, for example, a school district can get away with putting 70 students in one online English course underwritten by a corporate education entity, then it will happen, and English robots will serve as TAs. I realize I’m verging on a dystopian science-fiction version of events here, but that’s where I’m at.

      I completely agree with you that a human teacher is superior to spell check, a grammar robot, or any other computer program. But I think the gap will close. To me, this means that higher-order thinking, creativity, even contemplation will become more important, since I don’t think we’re anywhere near being able to replicate these things artificially. (It ain’t called the Humanities for nothing!) Actually, this is true of any job whose skills could be automated or outsourced. Thanks for reading!

