by Doctor Science
Mister Doctor Science just re-read Robert Heinlein's The Door Into Summer for the first time in many years. One thing that really struck him this time around was how far off Heinlein's predictions were about basic technological change. Not just about social or scientific changes (nuclear war, cold sleep, time travel), but about developments in robotics and computing.
The Door Into Summer was written in 1956 (around the time the future Mister Doctor and I were born), and has parts set in "the future" -- 1970 (about when MDS and I first read it) -- and "the far future" -- 2000. The hero is an engineer-inventor who invents (basically) the Roomba, then a multi-purpose robot, a self-driving car, and, eventually, AutoCAD.
What's interesting is not just that Heinlein got the speed of these changes all wrong -- when MDS and I first read the book, there were no Roombas -- but that he got the *order* of the inventions wrong. He imagined Roomba => self-driving car => AutoCAD, but in reality AutoCAD came 20 years before the Roomba, and self-driving cars will probably take another 20 years after that to be in wide use. The multi-purpose robots Heinlein imagined not only aren't around yet; no one is even seriously planning to make them.
I think Heinlein fell into a trap that caught (among others) at least one generation of AI researchers: assuming that tasks that are easy for humans are intrinsically easy, while tasks that are difficult for humans are intrinsically hard. Vacuuming is very easy for humans and takes almost no training, so he figured Roombas would be easy to make. Basic household tasks are considered "unskilled" menial labor, so they must be easy. On the other hand, engineering design takes a lot of learning, so he thought of AutoCAD as something that must be difficult to build.
But in retrospect it's easy to see that AutoCAD operates in a limited work-universe, with extremely clear parameters and logical processes -- just the sort of thing computers and robots are really good at. Driving a car, on the other hand, requires interacting with a complex, changing, unpredictable environment -- just because a teenager can learn to do it doesn't mean it's a simple task.
Worst of all, it turns out, is natural language processing. I don't know of *anyone* who thought (back in the 50s or 60s, say) that getting a computer to understand and produce natural language would be hugely difficult -- after all, two-year-olds do it, so how hard could it be?
It took a lot of experience (aka "failure") to appreciate that human toddlers can learn language not because it's easy, but because our brains are specialized for this extremely complex task. I think that's why you don't often see animals (or mentally disabled people) described as, e.g., "having the intelligence of a three-year-old." In fact, one of the great shocks for me of rearing an infant was realizing that she was already just as intelligent as an adult -- she just didn't *know* anything yet. I was used to dogs and cats, but an infant human puts things together so much faster it's almost scary, especially since you have to stay one step ahead at all times.
Anyway, I find it interesting to think about not what particular SF stories got right, but what they got *wrong* -- things that looked much more difficult than they turned out to be (space travel), or much simpler (robot arms, maybe). Or small, powerful computers -- Heinlein had people still using slide rules when interplanetary travel was common.