by Doctor Science
Mister Doctor Science just re-read Robert Heinlein's The Door Into Summer for the first time in many years. One thing that really struck him this time around was how far off Heinlein's predictions were about basic technological change. Not just about social or scientific changes (nuclear war, cold sleep, time travel), but about developments in robotics and computing.
The Door Into Summer was written in 1956 (around the time the future Mister Doctor and I were born), and has parts set in "the future" -- 1970 (about when MDS and I first read it) -- and "the far future" -- 2000. The hero is an engineer-inventor, who invents (basically) the Roomba, then a multi-purpose robot, a self-driving car, and, eventually, AutoCAD.
What's interesting is not just that Heinlein got the speed of these changes all wrong -- when MDS and I were first reading the book, there were no Roombas -- but that he got the *order* of the inventions wrong. He imagined Roomba => self-driving car => AutoCAD, but in reality AutoCAD came 20 years before the Roomba, and self-driving cars will probably take about 20 years after that to be in wide use. The multi-purpose robots Heinlein imagined not only aren't around yet, no one is seriously planning to make them.
I think Heinlein fell into a trap that caught (among others) at least one generation of AI researchers: assuming that tasks that are easy for humans are intrinsically easy tasks, while tasks that are difficult for humans are intrinsically hard. Vacuuming is very easy for humans and takes almost no training, so he figured Roombas would be easy to make. Basic household tasks are considered "unskilled", menial labor, so they must be easy. On the other hand, engineering design takes a lot of learning, so he thought of AutoCAD as something that must be difficult.
But in retrospect it's easy to see that AutoCAD takes place in a limited work-universe, with extremely clear parameters and logical processes -- just the sort of thing computers and robots are really good at. Driving a car, on the other hand, requires interacting with a complex, changing, unpredictable environment -- just because a teenager can learn it doesn't mean it's a simple task.
Worst of all, it turns out, is natural language processing. I don't know of *anyone* who thought (back in the 50s or 60s, say) that making a computer understand and produce natural language would be hugely difficult -- after all, two-year-olds do it, how hard could it be?
It took a lot of experience (aka "failure") to appreciate that human toddlers can learn language not because it's easy, but because our brains are specialized for this extremely complex task. I think that's why you don't often see animals (or mentally-disabled people) described as e.g. "has the intelligence of a three-year-old". In fact, one of the great shocks for me of rearing an infant was realizing that she was already just as intelligent as an adult -- she just didn't *know* anything yet. I was used to dogs and cats, but an infant human puts things together so much faster it's almost scary, given that you have to keep one step ahead at all times.
Anyway, I find it interesting to think about, not what particular SF stories got right, but what they got *wrong* -- things that looked much more difficult than they turned out to be (space travel), or much simpler. Robot arms, maybe. Or small, powerful computers -- Heinlein had people still using slide rules when interplanetary travel was common.
Always in motion is the future.
Posted by: Yoda | December 03, 2014 at 04:54 PM
McNabb was wrong about waistlines of the future too.
And what's with the teenager (?) on the right sitting butt-naked on the kitchen counter not three feet from the giant fruit?
Were they predicting bacteria- and microbe- resistant surface materials?
Posted by: Countme-In | December 03, 2014 at 05:04 PM
I don't think any 50's era SF gets anywhere near a reasonable prediction about computers and computing technology.
I grew up on that stuff. I was born in the 60's and spent a lot of my misspent youth with Asimov, RAH, Bradbury and the like. I remember the first time I saw the big wall of SF books in my Junior High School library, I resolved to just start at A and work my way to Z.
It's funny now to look back and see what their assumptions were. A talking, humanoid robot that vacuums the floor and sets the dishes? Totally believable. A pocket-sized computer cheap enough for everyone to have one? That's just crazy talk.
It makes me wonder, what are we missing now? What great revolution is just over the horizon that we just don't see?
Posted by: Chuchundra | December 03, 2014 at 05:16 PM
Or germ-free teenagers?
Posted by: dr ngo | December 03, 2014 at 05:16 PM
I copy my own comment, from the May 2014 world-building thread, because I am too lazy to type it again:
"There was a Philip Jose Farmer story written in the '60's in which a spaceship, travelling beyond the speed of light, bursts out of our universe, and then a crewwoman goes mad and throws herself out of the ship. Because the ship and body are outside the universe, they all have roughly the same mass and so they and the universe are all in orbit around one another. The ship's computer is madly spitting out punch cards as it tries to provide sufficiently good estimated solutions to the three-body problem that the crew can determine whether the ship will run into one of the other objects."
Posted by: JakeB | December 03, 2014 at 05:27 PM
"What great revolution is just over the horizon that we just don't see?"
The wearable martini patch?
The surround sound group orgasmatron with earth moving-simulation gyroscopes?
Driverless police cars, so the cops have their hands free for the more efficient drive-by shootings, instead of all this time-consuming steering, braking, and turning off of the ignition.
The pocket drone?
Guns with sensors that detect human flesh and jam, no matter what?
The virtual gin and tonic?
The disposable husband, in bulk?
The automatic in-bar-stool flatulence suppressor.
Replacement brothers.
A kissing booth app on your smartphone?
The carless driver?
Posted by: Countme-In | December 03, 2014 at 05:55 PM
Heinlein predicted the Really Good Battery, and that still hasn't happened.
Oh.
Posted by: Slartibartfast | December 03, 2014 at 09:46 PM
I was (and still am) a Philip K. Dick fan and he got a number of things right. Of course, he always imagined dystopias, so he had a bit of an advantage right there...
Posted by: liberal japonicus | December 03, 2014 at 10:49 PM
Philip K. Dick makes me want to dial a 3.
Posted by: Slartibartfast | December 03, 2014 at 11:24 PM
The order in which these things came out looks obvious and pretty well inevitable. Now. But how far back does the inevitability date?
You could start, obviously, at 1982, when AutoCAD came out; clearly it had won. You could also argue that AutoCAD of 1982 wasn't the AutoCAD that Dr. Science is talking about, in either a technical or a marketing or societal sense; and you'd have a point. (It is not easy for me to get my mind around the fact that the program is an entire generation old, and I could say to my grown son "This is not your father's AutoCAD.")
Still, any reasonable choice for the AutoCAD date leaves it well ahead of Roomba. In 1981, say, was that obvious? Or earlier?
I'd say it was, but am not unbiased: having been involved in that 1982 product, I would seem to have had a positive assessment of its feasibility and at least the remote possibility of its success. I leave the question to somebody else.
As to the driverless car, I remember from long ago a comment by Norbert Wiener, who had so much to do with the origin of automated systems. It may have been in _The Human Use of Human Beings_ (1950/54, read by me a little later) or maybe _God and Golem, Inc._ (1964) that he expressed a desire concerning the first driverless car that would go on the road: not to be in it. And the reason, as I recall, was essentially that in the OP: driving is not that easy a job for a computer.
One more anecdote to follow, to avoid making this too long.
Posted by: Porlock Junior | December 04, 2014 at 02:42 AM
In the mid-late 1960s the Stanford AI lab was working on an automaton that would solve a puzzle that was popular in those days before Rubik. You dump a set of four multi-colored blocks on the table and stack them squarely so that each side of the column is entirely of one color. The task was to solve the puzzle from start to end: the computer is to look at the blocks with its TV camera, solve the problem of how they need to be turned over and stacked, and then stack them.
I attended a seminar on this; it must have been in 1967 when I was at Berkeley, when they had progressed pretty far with it. What impressed me was the relative timings of the steps. Figuring what was where from the camera pic took some computing time; solving the problem took very little; and then it thought for a long time to figure out the arm motions required to actually pick the things up and stack them.
They're much better at that now, obviously. But the different rank-orderings of tasks for computers and for nervous systems was getting pretty obvious.
Posted by: Porlock Junior | December 04, 2014 at 03:07 AM
Porlock jr, a guest post with your name on it?
Posted by: liberal japonicus | December 04, 2014 at 05:22 AM
natural language processing is ridiculously complicated, using today's programming techniques anyway.
my day job involves automated mailing address standardization. address in -> standardized address out + post office discount for making their life easier.
and even something as simple as a three-line US address can be munged up in ways that make it impossible to parse into a real address using a solution that can also parse all other addresses. but when a person looks at one of these borked addresses, the intended destination is obvious - "yeah, that '3' is actually the unit number, not the last digit of the primary number." but telling the code how to handle this situation, while not breaking every other address, can be nearly impossible at times.
i think a lot of it has to do with the way we write programs. hard logic with lexing separated from parsing and parsing separated from the domain knowledge (the USPS list of addresses) is wrong. humans don't work that way. we don't go through an address character by character, splitting it into tokens and then carrying those tokens over to our list of street names and numbers.
but that's how today's programming languages force you to handle it.
i think, if we're ever going to get AI, it's going to happen using a programming paradigm that's vastly different from what we've been doing since the 1960s.
Posted by: cleek | December 04, 2014 at 07:32 AM
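cleek's "that '3' is actually the unit number" example can be sketched in a few lines. Everything here is hypothetical -- the function names and the toy address list stand in for real USPS data, which this is not:

```python
import re

# hypothetical stand-in for USPS domain knowledge:
# the deliverable primary numbers on each street
KNOWN_PRIMARIES = {"MAIN ST": range(100, 131)}

def naive_parse(line):
    # lex first, parse second: a run of digits becomes one "number" token
    m = re.match(r"(\d+)\s+(.+)", line.upper())
    return (int(m.group(1)), m.group(2)) if m else None

def informed_parse(line):
    # interleave parsing with domain knowledge: if the whole number
    # isn't deliverable, try peeling off a trailing unit digit
    num, street = naive_parse(line)
    if num in KNOWN_PRIMARIES.get(street, []):
        return {"number": num, "street": street}
    if num // 10 in KNOWN_PRIMARIES.get(street, []):
        return {"number": num // 10, "street": street, "unit": num % 10}
    return None

# the tokenizer alone has no way to know "1203" is really 120, unit 3
print(naive_parse("1203 Main St"))
print(informed_parse("1203 Main St"))
```

The point of the sketch is that the second parser has to consult the address list *in the middle of* deciding what the tokens are -- exactly the entanglement of lexing, parsing, and domain knowledge the comment describes, and the thing strictly layered pipelines make hard.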
Does anyone recall when 3-D printing was predicted? That seems to me like one of the great leaps forward that got generally missed.
Posted by: wj | December 04, 2014 at 08:03 AM
It's interesting to contrast the (slow) development of self-driving cars and the development of drones, some of which seem to be semi-autonomous.
I guess the lesson is "progress can be rapid, if you don't care about collateral damage"
Posted by: Snarki, child of Loki | December 04, 2014 at 08:43 AM
natural language processing is ridiculously complicated, using today's programming techniques anyway.
[...]
i think, if we're ever going to get AI, it's going to happen using a programming paradigm that's vastly different from what we've been doing since the 1960s.
It kind of is what we see happening now, actually. I'm about 6y out of date with NLP, but when I was last in school that was the domain I was enmeshed in, and the areas with "hard" problems which saw very rapid progress in the last two decades generally were the ones who eschewed traditional rule-based AI approaches and just went with statistical approximations based on machine learning and very large data sets. And as much as my rationalist mind is loath to admit it, it works. The impact of the statistical revolution in language technology over the last 20 years is nothing short of amazing. So while it's not a panacea, the paradigm shift you're talking about has already happened in a lot of NLP sub-domains.
Posted by: Nombrilisme Vide | December 04, 2014 at 09:03 AM
Progress can appear rapid if you have a very small number of other objects moving about the same space. And those unlikely to take random and abrupt changes of direction. (No animals or small children piloting aircraft. And the drivers are less likely to do something crazy, too.) Hence the ease of programming basically autonomous drones.
Posted by: wj | December 04, 2014 at 09:09 AM
the drivers are less likely to do something crazy
that's the part that makes me roll my eyes whenever someone says driverless cars are right around the corner.
the hard part about driving isn't keeping your car on the road; that's just a matter of seeing where the road is and knowing how a steering wheel works. the problem with driving is all the other drivers, and cyclists, and pedestrians, and deer - all the unpredictable, illogical, autonomous stuff out there with you.
driverless cars that share roads with people will require situational awareness and a pretty deep understanding of human behavior. that's much harder than staying between the lines and watching for red lights.
Posted by: cleek | December 04, 2014 at 09:47 AM
@NV --
You might have already seen this, but here's a link to an old Language Log post that points out that an aggregate bigram model (a simple model in machine-learning terms) estimates that the sentence "Colorless green ideas sleep furiously" is far more likely to occur in language than is "Furiously sleep ideas green colorless". These, of course, are the sentences Chomsky used in his successful argument that semantics and syntax should be treated entirely distinctly, since the first sentence is grammatical but the second is not.
Posted by: JakeB | December 04, 2014 at 10:04 AM
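A minimal smoothed bigram scorer shows the effect JakeB describes. The tiny corpus below is made up for illustration; the aggregate bigram model in the Language Log post is a more sophisticated variant trained on vastly more data:

```python
import math
from collections import Counter

# made-up toy corpus; a real model is trained on a huge one
corpus = ("colorless liquid spilled . green ideas spread . "
          "ideas sleep . sleep furiously").split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)  # vocabulary size

def bigram_logprob(sentence):
    # add-one (Laplace) smoothed log-probability of the word sequence
    words = sentence.split()
    return sum(math.log((bigrams[(w1, w2)] + 1) / (unigrams[w1] + V))
               for w1, w2 in zip(words, words[1:]))

good = bigram_logprob("colorless green ideas sleep furiously")
bad = bigram_logprob("furiously sleep ideas green colorless")
assert good > bad  # grammatical order scores higher with no syntax rules at all
```

The grammatical ordering wins simply because more of its adjacent word pairs have been seen before, which is the statistical point: no explicit grammar is consulted anywhere.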
"The order in which these things came out looks obvious and pretty well inevitable. Now. But how far back does the inevitability date?"
Think about it ahead of time:
1) 'Auto CAD' requires that the computer store sets of coordinates (I'll say 4-D, to allow time) for points, to draw lines and shapes. Many of which would be simply plugging in a template, such as circular cross-sections. Even back in the day, that's not that much data. No interaction with a real-time environment is needed.
2) A roomba involves moving around a far more simple and constrained environment than a car would encounter.
3) A car means being able to deal with a highly complex, constantly changing environment, with few rules governing the players, and complexity includes things like light and traction.
Posted by: Barry | December 04, 2014 at 12:38 PM
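Barry's point (1) -- that CAD is just stored coordinates plus templates, with no real-time environment to sense -- can be sketched minimally. The names here are hypothetical, not actual AutoCAD structures:

```python
import math

# a drawing is nothing but stored geometry: no sensing, no changing
# environment, which is why CAD was computationally tractable so early
drawing = []

def add_line(x1, y1, x2, y2):
    drawing.append(("line", (x1, y1), (x2, y2)))

def add_circle(cx, cy, r, segments=16):
    # a "template": plug in a circular cross-section as a polyline
    pts = [(cx + r * math.cos(2 * math.pi * k / segments),
            cy + r * math.sin(2 * math.pi * k / segments))
           for k in range(segments + 1)]
    for p, q in zip(pts, pts[1:]):
        add_line(*p, *q)

add_line(0, 0, 10, 0)
add_circle(5, 5, 2)
print(len(drawing))  # 1 line plus 16 circle segments
```

Even back in the day, a few thousand such tuples was a manageable amount of data, and every operation on them is deterministic -- the "limited work-universe" of the OP.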
"driverless cars that share roads with people will require situational awareness and a pretty deep understanding of human behavior. that's much harder than staying between the lines and watching for red lights."
Yes, the fastest route to driverless cars is to make ALL cars driverless or, at a minimum, make a driverless car environment available.
Posted by: Marty | December 04, 2014 at 12:44 PM
yep.
but a driverless car that can't leave its environment is basically a 5-person train car. and having a bunch of small vehicles all driving the same set of constrained routes is inefficient. you might as well just lay train tracks.
Posted by: cleek | December 04, 2014 at 01:01 PM
Car insurance is going to be very interesting with the driverless car.
What happens if a perfectly sober person driving a traditional car collides with a driverless car and its drunk passenger passed out in the back seat?
I think we're going to need some new laws, but as Brett and others will tell you, then the cops will be forced to kill a lot of people.
Posted by: Countme-In | December 04, 2014 at 01:26 PM
I think Heinlein's shortcoming, in general, was failure to anticipate how really difficult artificial intelligence and learning would turn out to be. Robots and automatic, adaptive machines of all kinds suffer for it. The Moon Is A Harsh Mistress was, I think, the worst of his sins.
I am not sure any skiffy writers of that time did, actually. Asimov put his thinking robots so far in the future that it's at least plausible, given what we know now, but not everyone did.
Clarke's HAL was I think quite a bit more complicated than we could have managed in 2001, in terms of parsing and autonomy.
Posted by: Slartibartfast | December 04, 2014 at 02:05 PM
I am assuming the first step is hybrid driverless cars with driverless interstates, maybe real hov lanes that are driverless. Some controllable environments.
Although, while driverless cars may "cause" accidents, avoiding them, or, more likely, choosing between accidents is not as complex as one might think.
Posted by: Marty | December 04, 2014 at 02:23 PM
If all cars are driverless, then they can all signal their intentions to their near neighbors, who will then have more time to respond. Easier-peasier. Swarm control is harder than you'd think, but it's a lot easier when the "swarm" is limited to some small integer number of columns, changing columns as needed.
You'd definitely need roads that were altered to accommodate driverless. One of the hard parts of the driverless convoy, in the Middle East, is being able to tell where the road IS. So even though driverless convoys are following the leader (the lead vehicle has a human driver), staying in the lane can be difficult.
It's when driverless cars mix up with human drivers that things get dicey. All of one, or all of the other is the simplest way to go.
Posted by: Slartibartfast | December 04, 2014 at 02:36 PM
The estimates about chess playing computers were not that far off given that the predictions were made around the time the first real computers were built. At the predicted time computers had at least reached the master class but not yet surpassed all human players. But I still think they overestimated AI and underestimated raw computing power, i.e. they thought it would be mainly the former and not the brute force approach that has (until now) dominated the field.
Poe was totally off there though. He believed that any chess automaton would be unbeatable once it had grasped the rules. That was his one faulty argument why the Chess Turk was a fraud (while every other argument in his famous paper on that turned out to be true).
Posted by: Hartmut | December 04, 2014 at 03:40 PM
we'll have FTL transporter technology before we have safe driverless cars that can go anywhere on-demand (which is the entire draw of the car).
Posted by: cleek | December 04, 2014 at 04:09 PM
The estimates about chess playing computers were not that far off given that the predictions were made around the time the first real computers were built. At the predicted time computers had at least reached the master class but not yet surpassed all human players.
I thought that Kasparov, after having lost to Deep Blue (?), should have invited the computer to share a sandwich with him.
Posted by: Ugh | December 04, 2014 at 05:00 PM
@liberal japonicus
Thank you, sounds like a good idea. But I don't think I could get it together just now, being laid up with an infection that gives me lots of leisure to read stuff on the Net but rather less mental acuity. (I assume you have my e-mail address in the site records, though; send a note if you want to pursue anything.)
Posted by: Porlock Junior | December 04, 2014 at 06:30 PM
How pleasant: several people seem to have come up with the same idea I've had lately, that driverless cars could be useful in an environment that lacked those flaky hominids.
Where I differ with cleek is that driverless roads could be very useful, particularly in the hybrid form that Marty describes. I think they could be designed -- financing this is another matter -- to be very fast and fuel-efficient and safe, which could be appealing.
Posted by: Porlock Junior | December 04, 2014 at 06:35 PM
@Hartmut -
"The estimates about chess playing computers were not that far off given that the predictions were made around the time the first real computers were built."
Hmm, might depend on one's preferred value of "not that far."
There was a longish period, it seemed to me, when high-level computer chess seemed rather like controlled fusion: always 20 years or so in the future. That ended, of course, for one of those two fields.
No doubt, though, about the bad predictions of the effect of clever algorithms versus raw power. That lesson was becoming clear to many in the AI field by the late 60s, but it still took some time to get to the Deep Blue level and beyond.
Posted by: Porlock Junior | December 04, 2014 at 06:42 PM
At this point, controlled fusion is minus 62 years in the future, by my estimate. A lot of people just don't like the form it arrived in.
Posted by: Brett Bellmore | December 04, 2014 at 07:17 PM
I suspect SF writers have used a lot of futuristic technology as plot devices without giving much thought to whether it could ever become a reality. Or when.
Chuchundra: "What great revolution is just over the horizon that we just don't see?"
Ignoring ray-guns and such, lasers and their impact were just about totally unforeseen. Now, the direct and indirect use of lasers makes up a large percentage of the world economy.
JakeB: "...solutions to the three-body problem..."
We now know that three-body problems can't be solved in closed form, though limited, and perhaps useful, numerical predictions can be made.
cleek: "...pretty deep understanding of human behavior."
Except for, perhaps, dealing with passengers, a driverless car may not have to understand anything about human behavior. It need only accurately sense the things in its environment that impact the effective and safe maneuvering of the car. It can just about totally ignore everything else. Driving seems so complex to us because we are aware of, and distracted by, so many things that have nothing to do with driving the vehicle.

Also, a driverless car would have a very much smaller reaction envelope than a driver has. It can detect a change in its environment and execute the best reaction to it before a driver even becomes aware of the change. An advantage of driverless cars is that when some improvement is made in driving technique, all of them can know about it almost instantly. This would be a huge improvement over the current practice of training from scratch every single hormone-saturated driving unit.
Posted by: CharlesWT | December 04, 2014 at 07:39 PM
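On CharlesWT's three-body aside: there is no general closed-form solution, but step-by-step numerical prediction (the kind JakeB's punch-card-spitting ship computer was doing) is routine. A deliberately crude toy, in arbitrary made-up units, integrating three equal masses started near a rotating Lagrange triangle:

```python
# Hypothetical toy: arbitrary units, G = 1, no real physics library.
G = 1.0
masses = [1.0, 1.0, 1.0]
# equilateral (Lagrange) configuration, circumradius 1, near-circular motion
pos = [[0.0, 1.0], [-0.866, -0.5], [0.866, -0.5]]
vel = [[-0.76, 0.0], [0.38, -0.658], [0.38, 0.658]]

def accelerations(pos):
    # pairwise Newtonian gravity on each body
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

dt = 0.001
for _ in range(1000):  # march the prediction forward one time unit
    acc = accelerations(pos)
    for i in range(3):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt
```

Nothing here "solves" anything in the analytic sense; it just grinds out positions one small step at a time, which is exactly the limited-but-useful prediction the comment refers to.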
Will driverless cars be programmed to obey speed limits?
If so, will libertarians ever buy them?
--TP
Posted by: Tony P. | December 04, 2014 at 09:27 PM
"There wasn't any speeding even though, ironically, Google's engineers have determined that speeding actually is safer than going the speed limit in some circumstances.
[...]
Google's driverless car is programmed to stay within the speed limit, mostly. Research shows that sticking to the speed limit when other cars are going much faster actually can be dangerous, Dolgov says, so its autonomous car can go up to 10 mph above the speed limit when traffic conditions warrant."
Do Self-Driving Cars Make Speed Limits Obsolete?
Posted by: CharlesWT | December 04, 2014 at 09:36 PM
Google Testing Its Self-Driving Cars In A Complete Virtual "Matrix California": They must have taken the red pill.
Posted by: CharlesWT | December 04, 2014 at 10:06 PM
"At this point, controlled fusion is minus 62 years in the future, by my estimate."
Yeah, sure, except, as you surely know, the "controlled" in that phrase is meant quite specifically to differentiate one that, once started, is subject to modification of its reaction rate and duration -- control, as one might say -- from a fusion reaction which, once started, is not subject to any further control till it's damn well done -- a sort of big bang reaction, if the phrase hadn't already been taken.
Posted by: Porlock Junior | December 05, 2014 at 01:45 AM
So, you control when the reaction happens by when you start it, and the size of the reaction by which fuel pellet you use. There are even fusion 'bombs' which can have their yield 'dialed', by injecting greater or lesser amounts of fuel into them. It isn't as though there weren't useful chemical processes in industry which are no more capable of being modulated once you start them.
It's still controllable enough that we could have had fusion powered spacecraft vastly superior to anything we have now, and fusion power plants, if the word "bomb" didn't make some people go all wonky.
"Will driverless cars be programmed to obey speed limits?
If so, will libertarians ever buy them?"
Would ANYBODY buy them? It isn't as though the majority of people obey speed limits. It's well known that speed limits are deliberately set below the safe, natural speed on a road, to enhance revenue. They set them with the intention that they be routinely violated, so they can fine people when they want the money.
Having self-driving cars that had to observe all speed limits would amount to a full time nation-wide rolling blockade. It would be so disruptive that... Hm, maybe a good idea after all, we might get the speed limits set rationally!
Posted by: Brett Bellmore | December 05, 2014 at 04:57 AM
Porlock, if the spirit moves you, send a text file to the email under the kitty. I'd love to know more about the process of putting together that first AutoCAD program.
Posted by: liberal japonicus | December 05, 2014 at 08:30 AM
Brett, over here the speed limits are set the way they are because it has been observed that people drive a certain amount (10-11 km/h on average) above them. So, if the authorities think that 60 km/h is safe, they put a 50 km/h sign there. If they put a 60 sign there, then people would drive 70. People do not usually ignore the signs (unless they seem stupid at face value) but read them more or less as: that's the speed you are expected to have (lower limit). The police are expected to stop only those that go more than 11 km/h faster and those that go too far below. So, if the sign says 60 and you drive only, say, 30 without a sound reason (and thereby disturb the flow of traffic), you'll get fined, and you'll get a speeding fine at 72.
In the big cities the police announce in advance where they will put the mobile speed traps.
In the past, revenue creation played a role, but not anymore, or at least not for much longer (iirc there were some court decisions about it).
Posted by: Hartmut | December 05, 2014 at 09:08 AM
It need only accurately sense the things in its environment that impact the effective and safe maneuvering of the car.
which is, in itself, a monumental task: object identification and tracking, at high speeds, with ever-changing lines-of-sight.
but you can't just react your way through traffic. sometimes you have to take initiative - changing lanes in heavy traffic, for example. you can't always just wait for a hole to open up (the exit is coming up too fast), sometimes you have to nudge your way in, but maybe the driver to your right isn't interested in letting you in, so you have to try the guy behind him.
or, merging when a lane goes away - sometimes it takes communication with other drivers (hopefully polite!) to make it work. four way stops - the rules are simple but you often have to make eye contact with other drivers and figure out the order with hand signals and smiles. emergency situations.
yes, if all cars are driverless and they're on their own roads, a lot of that goes away. but we are definitely not going to build an entire secondary, door-to-door road system in the US just for driverless cars. there's no room, and we can barely maintain the one we have.
Posted by: cleek | December 05, 2014 at 09:24 AM
A good TED talk, mostly about neuroscience, but it does touch on why motions that seem simple are hard for robotics. Overall worth watching:
http://www.ted.com/talks/daniel_wolpert_the_real_reason_for_brains?language=en
Posted by: thompson | December 05, 2014 at 09:38 AM
Controlled falling off cliffs: a technology mastered long before parachutes. You can always choose when to jump. Damn that Da Vinci for rhetorically confusing the issue and claiming credit for what already existed.
Pathetic.
Posted by: Porlock Junior | December 06, 2014 at 06:24 AM
True, though, about building cars with automatic enforcement of existing speed limits and all. It would be a mess. It would show that a system built specifically for driverless cars would need to be designed to use their specific strengths.
Try to imagine the process of merging in traffic when all vehicles are competently and rationally driven by ego-free entities with full information about traffic conditions in all directions. It would be unrecognizable, scary as hell to look at with one's implicit assumption of human drivers, intolerable to people who *like* driving as a high-risk game and dominance display. Also extraordinarily fast and safe.
A nice technology. But cleek is no doubt right that building such a thing is wildly unlikely, or just impossible, for reasons outside the technology.
Posted by: Porlock Junior | December 06, 2014 at 06:39 AM
We'll get driverless cars that can handle interacting with humans. Then human driving will be discouraged, and the upcoming generation largely won't bother learning. And, THEN, with very few human drivers about, human driving on regular roads will be banned, and the car software will be switched over to the efficient "every car is like you" mode.
And a week later some Nigerian hacker will kill 10% of the US population by programming all the cars to reenact Maximum Overdrive. Which is my way of saying, the biggest obstacle to the vehicles gaining all that extraordinary efficiency by talking to each other, is the vulnerability to hacking that would result.
Posted by: Brett Bellmore | December 06, 2014 at 06:50 AM
over here the speed limits are set the way they are because it has been observed that people drive a certain value (10-11 km/h on average) above it.
Here, speed limits are set based on what is a safe speed to drive regardless of (normal variation in) weather conditions. That is, if a curve is safe at 45 mph in dry conditions, but only at 30 mph in the rain, the limit is set at 30. Which is why people speed -- they know that the posted limit is well below the safe speed. And it saves the traffic police from having to argue about what the "maximum safe speed" under the prevailing weather conditions was. If you are over the posted speed, there's nothing to talk about. (But in normal conditions they simply don't bother to ticket higher, but safe, speeds.)
Posted by: wj | December 06, 2014 at 12:23 PM
I look forward to the driverless getaway car.
Not to mention the riderless bicycle built for two.
Driverless motorcycles?
Could you program your driverless car to leave the scene of an accident?
What if the passenger in a driverless car is the headless horseman?
Posted by: Countme-In | December 06, 2014 at 12:48 PM
Well then he would hardly be a horseman anymore now would he Count?
Posted by: JakeB | December 06, 2014 at 02:12 PM
Unless he is a headless centaur.
It may be difficult to distinguish between a headless horse and an..eh..abbreviated centaur though.
Posted by: Hartmut | December 06, 2014 at 04:01 PM
"It kind of is what we see happening now, actually. I'm about 6y out of date with NLP, but when I was last in school that was the domain I was enmeshed in, and the areas with "hard" problems which saw very rapid progress in the last two decades generally were the ones who eschewed traditional rule-based AI approaches and just went with statistical approximations based on machine learning and very large data sets."
That's just a different brute-force approach, one that is as far from approximating how humans think as the earlier NLP approaches were.
Posted by: Scott P. | December 07, 2014 at 02:28 PM
I don't disagree, but it's a brute-force approach that works very well in an awful lot of circumstances. Beyond that, there's no specific reason why an NLP - or even AI - system should necessarily solve problems in the same manner a human would, not least because we don't necessarily have a good rationalist explanation of the precise process we undergo to solve the problems ourselves. Even if we did, though, there's no reason to assume the same process would work as well or better on entirely different "hardware".
I'm sympathetic to the rationalist position, and I think it's important for the understanding of our own cognitive processes, but it seems unjustifiably arbitrary to decree that "true" NLP or AI can only exist if it's aping ours precisely.
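To make concrete what "statistical approximations based on machine learning and very large data sets" means in practice: here's a toy Python sketch of my own (not any real NLP system, and absurdly small compared to real training corpora) that "learns" next-word prediction purely from bigram counts, with no grammar rules anywhere:

```python
from collections import Counter, defaultdict

# Tiny toy corpus standing in for the very large data sets real systems train on.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count bigram frequencies: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word; no linguistic rules at all."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on"))   # follows the counts, not a grammar
```

No rationalist model of language anywhere in there -- just counting -- yet scaled up by many orders of magnitude, this family of approach is roughly what drove the rapid progress mentioned above.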
Posted by: Nombrilisme Vide | December 07, 2014 at 04:03 PM
Though we'll need that sort of understanding to boost our own intelligence much. Which we really need to do, having created a civilization so dependent on higher-than-average intelligence just to maintain it. As a species we've about reached our Peter Principle limit.
Posted by: Brett Bellmore | December 07, 2014 at 05:24 PM
JakeB: That Farmer story ("The Shadow of Space") was actually originally written as a rejected episode treatment for Star Trek! It's fun to imagine the alternate-universe version of Star Trek that would actually have run it.
Posted by: Matt McIrvin | December 09, 2014 at 09:58 AM
...Lately, I've noticed a major rash of "why does the future suck"/"why did progress stop in the year [year]"/"where's my flying car?" stories. They usually blame it on something political they don't like.
A common feature of these articles is a tendency to assume that the futurism in mid-20th-century science fiction and magazine articles actually made sense, and the problem is with the world. I see no reason why people should assume this.
Old science fiction from that era usually imagined transportation technology vastly more capable than what we have today. I think this is just because, when it was written, transportation technology was in an exponentially advancing phase. But this doesn't happen with any technology forever, and in reality transportation hit a shoulder sometime around 1970, and progress since then has been more incremental (and largely driven by large advances in computer and communication tech). Since transportation is the paradigmatic sign of Progress in old SF, naturally people attuned to old SF interpret this as Progress having stalled.
The thing I notice is that old predictions about communication technology are remarkably often either right on target or too conservative. It's one area where reality really delivered. The "where's my flying car?" articles always have to have a bit about how the Internet and smartphones are stupid and make you stupid so they don't count.
Posted by: Matt McIrvin | December 09, 2014 at 10:06 AM
The biggest hazard for science fiction writers (as the best ones freely acknowledge) is the tendency to assume the continuation of current trends. Which can be fine for "if this goes on" type stories. But it leaves little space for truly disruptive innovations -- cf. http://www.businessweek.com/articles/2014-12-04/kitty-litters-invention-spawned-pet-cat-industry ;-)
Posted by: wj | December 09, 2014 at 10:17 AM
"Old science fiction from that are usually imagined transportation technology vastly more capable than what we have today."
This is fair enough. Think about NERVA, Orion. Then look at Philae, dead because it ended up in a shadow, when a simple RTG would have kept it running.
The old SF writers imagined a lot of things, but the one thing they never imagined, because it was too alien to their own personalities, was that we would have the tools to conquer the solar system, and suffer a failure of nerve about using them. They never imagined that human civilization would chicken out.
Posted by: Brett Bellmore | December 09, 2014 at 12:29 PM
Too right, Brett.
See also this http://aeon.co/magazine/science/why-has-human-progress-ground-to-a-halt/ on our failure of nerve generally and its impact.
Posted by: wj | December 09, 2014 at 12:38 PM
@Matt --
In that case, the third body in the story would have had to have been a red-shirted security officer, no doubt.
Posted by: JakeB | December 09, 2014 at 01:23 PM
It's hard to complain about the lack of technological progress when you can carry in your pocket devices whose functionality used to weigh tons and fill heavily air-conditioned rooms thousands of square feet in size.
Posted by: CharlesWT | December 09, 2014 at 04:29 PM
So.
To me, the biggest failure of prediction was failing to appreciate just how little technology was going to do to reduce human drudgery or make life overall more satisfying.
Gibson was closest, but his idea of how computer networks and human/machine interfaces would work turned out to be not very accurate. But they sure were fun to read about.
Posted by: Slartibartfast | December 09, 2014 at 06:18 PM
A bit late for this conversation, but there is an invention that no one seems to be predicting, which in some ways has already started:
People constantly recording their own life.
And I don't just mean when things are interesting, I mean *all the time*. Sent immediately somewhere else. Probably encrypted with a key only they (and a trusted third party, in case they're murdered) know.
This is going to change all sorts of things about crime. No, not because of criminals recording themselves -- presumably they'd be smart enough to turn it off -- but how about almost everyone being able to produce an alibi, on demand?
You think 'Oh, people would fake them'...but CGI is more time-consuming than you think, and if the recordings are stored on third-party servers, people would have to fake them in advance so they could put them up in real time. So all anyone would have to do to prove an alibi is go forward or backward far enough in the video until they find *someone else*, and that other person's video confirms they were actually there.
Seamlessly switching to pre-rendered CGI and then back, in real time, is not plausible right now.
And, of course, *victims* would have recorders running, also.
As would cars, filming the driver and recording the location. As would houses, recording who enters.
People only care about privacy when the information is out of their control. Give people the chance to record all this stuff *themselves* and put it somewhere they know it's secure, they will.
So within something like 30 years, crime is going to completely change. Completely and utterly.
AND NO ONE IS TALKING ABOUT IT.
Posted by: DavidTC | December 17, 2014 at 12:18 AM