
December 03, 2014

Comments

Always in motion is the future.

McNabb was wrong about waistlines of the future too.

And what's with the teenager (?) on the right sitting butt-naked on the kitchen counter not three feet from the giant fruit?

Were they predicting bacteria- and microbe- resistant surface materials?

I don't think any '50s-era SF gets anywhere near a reasonable prediction about computers and computing technology.

I grew up on that stuff. I was born in the '60s and spent a lot of my misspent youth with Asimov, RAH, Bradbury and the like. I remember the first time I saw the big wall of SF books in my Junior High School library; I resolved to just start at A and work my way to Z.

It's funny now to look back and see what their assumptions were. A talking, humanoid robot that vacuums the floor and sets the dishes? Totally believable. A pocket-sized computer cheap enough for everyone to have one? That's just crazy talk.

It makes me wonder, what are we missing now? What great revolution is just over the horizon that we just don't see?

Or germ-free teenagers?

I copy my own comment, from the May 2014 world-building thread, because I am too lazy to type it again:

"There was a Philip Jose Farmer story written in the '60s in which a spaceship, travelling beyond the speed of light, bursts out of our universe, and then a crewwoman goes mad and throws herself out of the ship. Because the ship and body are outside the universe, they all have roughly the same mass and so they and the universe are all in orbit around one another. The ship's computer is madly spitting out punch cards as it tries to provide sufficiently good estimated solutions to the three-body problem that the crew can determine whether the ship will run into one of the other objects."

"What great revolution is just over the horizon that we just don't see?"

The wearable martini patch?

The surround sound group orgasmatron with earth moving-simulation gyroscopes?

Driverless police cars, so the cops have their hands free for the more efficient drive-by shootings, instead of all this time-consuming steering, braking and turning off of the ignition.

The pocket drone?

Guns with sensors that detect human flesh and jam, no matter what?

The virtual gin and tonic?

The disposable husband, in bulk?

The automatic in-bar-stool flatulence suppressor.

Replacement brothers.

A kissing booth app on your smartphone?

The carless driver?

Heinlein predicted the Really Good Battery, and that still hasn't happened.

Oh.

I was (and still am) a Philip K. Dick fan and he got a number of things right. Of course, he always imagined dystopias, so he had a bit of an advantage right there...

Philip K. Dick makes me want to dial a 3.

The order in which these things came out looks obvious and pretty well inevitable. Now. But how far back does the inevitability date?

You could start, obviously, at 1982, when AutoCAD came out; clearly it had won. You could also argue that AutoCAD of 1982 wasn't the AutoCAD that Dr. Science is talking about, in either a technical or a marketing or societal sense; and you'd have a point. (It is not easy for me to get my mind around the fact that the program is an entire generation old, and I could say to my grown son "This is not your father's AutoCAD.")

Still, any reasonable choice for the AutoCAD date leaves it well ahead of Roomba. In 1981, say, was that obvious? Or earlier?

I'd say it was, but am not unbiased: having been involved in that 1982 product, I would seem to have had a positive assessment of its feasibility and at least the remote possibility of its success. I leave the question to somebody else.

As to the driverless car, I remember from long ago a comment by Norbert Wiener, who had so much to do with the origin of automated systems. It may have been in _The Human Use of Human Beings_ (1950/54, read by me a little later) or maybe _God and Golem, Inc._ (1964) that he expressed a desire concerning the first driverless car that would go on the road: not to be in it. And the reason, as I recall, was essentially that in the OP: driving is not that easy a job for a computer.

One more anecdote to follow, to avoid making this too long.

In the mid-late 1960s the Stanford AI lab was working on an automaton that would solve a puzzle that was popular in those days before Rubik. You dump a set of four multi-colored blocks on the table and stack them squarely so that each side of the column is entirely of one color. The task was to solve the puzzle from start to end: the computer is to look at the blocks with its TV camera, solve the problem of how they need to be turned over and stacked, and then stack them.

I attended a seminar on this; it must have been in 1967 when I was at Berkeley, when they had progressed pretty far with it. What impressed me was the relative timings of the steps. Figuring what was where from the camera pic took some computing time; solving the problem took very little; and then it thought for a long time to figure out the arm motions required to actually pick the things up and stack them.

They're much better at that now, obviously. But the different rank-orderings of tasks for computers and for nervous systems were getting pretty obvious.
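For what it's worth, the middle step really is the cheap one. Here's a minimal brute-force sketch of the planning part of that puzzle; the block representation (three opposite-face color pairs per cube) and the test instance are my own invented assumptions, not the actual Stanford setup:

```python
from itertools import product

def lateral_tuples(pairs):
    """All (N, E, S, W) side colorings reachable by rotating one cube.

    `pairs` lists the cube's three opposite-face color pairs. Picking
    one pair as the (hidden) top/bottom leaves the other two arranged
    in a 4-cycle around the sides, up to rotation and mirror image.
    """
    tuples = set()
    for v in range(3):                                   # vertical pair
        (b, b2), (c, c2) = [p for i, p in enumerate(pairs) if i != v]
        for cycle in ([b, c, b2, c2], [b, c2, b2, c]):   # mirror
            for r in range(4):                           # rotation
                tuples.add(tuple(cycle[(r + k) % 4] for k in range(4)))
    return tuples

def solve(cubes):
    """Try every orientation of every cube (at most 24**4 combos)."""
    for combo in product(*(lateral_tuples(c) for c in cubes)):
        # each of the 4 column sides must show a single color
        if all(len({face[s] for face in combo}) == 1 for s in range(4)):
            return combo
    return None
```

Even an exhaustive search like this over all 24^4 orientation combinations is trivial for a computer; perception and arm motion are where the time went.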

Porlock jr, a guest post with your name on it?

natural language processing is ridiculously complicated, using today's programming techniques anyway.

my day job involves automated mailing address standardization. address in -> standardized address out + post office discount for making their life easier.

and even something as simple as a three-line US address can be munged up in ways that make it impossible to parse into a real address using a solution that can also parse all other addresses. but when a person looks at one of these borked addresses, the intended destination is obvious - "yeah, that '3' is actually the unit number, not the last digit of the primary number." but telling the code how to handle this situation, while not breaking every other address, can be nearly impossible at times.

i think a lot of it has to do with the way we write programs. hard logic with lexing separated from parsing and parsing separated from the domain knowledge (the USPS list of addresses) is wrong. humans don't work that way. we don't go through an address character by character, splitting it into tokens and then carrying those tokens over to our list of street names and numbers.

but that's how today's programming languages force you to handle it.
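as a toy illustration of that ambiguity (not real USPS logic; the address list and the splitting rule here are invented), one way out is to generate several candidate parses and let the domain knowledge pick the winner, instead of tokenizing first and asking questions later:

```python
import re

# Hypothetical sketch: a naive tokenizer can't tell whether the "3" in
# "1233 MAIN ST" ends the house number or names a unit. So: propose
# multiple splits, then check each against a stand-in "address list".
KNOWN = {("123", "MAIN ST")}          # (primary number, street) pairs

def candidates(line):
    m = re.match(r"(\d+)\s+(.*)", line.strip().upper())
    if not m:
        return
    num, street = m.groups()
    yield num, None, street           # all digits = house number
    if len(num) > 1:                  # ...or peel the last digit off
        yield num[:-1], num[-1], street

def parse(line):
    for num, unit, street in candidates(line):
        if (num, street) in KNOWN:
            return {"number": num, "unit": unit, "street": street}
    return None
```

so `parse("1233 Main St")` resolves the "3" as a unit number only because the domain knowledge rejects the literal reading first; scale that interplay up to every borked address in the country and you see the problem.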

i think, if we're ever going to get AI, it's going to happen using a programming paradigm that's vastly different from what we've been doing since the 1960s.

Does anyone recall when 3-D printing was predicted? That seems to me like one of the great leaps forward that got generally missed.

It's interesting to contrast the (slow) development of self-driving cars and the development of drones, some of which seem to be semi-autonomous.

I guess the lesson is "progress can be rapid, if you don't care about collateral damage"

natural language processing is ridiculously complicated, using today's programming techniques anyway.

[...]

i think, if we're ever going to get AI, it's going to happen using a programming paradigm that's vastly different from what we've been doing since the 1960s.

It kind of is what we see happening now, actually. I'm about six years out of date with NLP, but when I was last in school that was the domain I was enmeshed in, and the areas with "hard" problems which saw very rapid progress in the last two decades generally were the ones that eschewed traditional rule-based AI approaches and just went with statistical approximations based on machine learning and very large data sets. And as much as my rationalist mind is loath to admit it, it works. The impact of the statistical revolution in language technology over the last 20 years is nothing short of amazing. So while it's not a panacea, the paradigm shift you're talking about has already happened in a lot of NLP sub-domains.

Progress can appear rapid if you have a very small number of other objects moving about the same space. And those are unlikely to make random and abrupt changes of direction. (No animals or small children piloting aircraft. And the drivers are less likely to do something crazy, too.) Hence the ease of programming basically autonomous drones.

the drivers are less likely to do something crazy

that's the part that makes me roll my eyes whenever someone says driverless cars are right around the corner.

the hard part about driving isn't keeping your car on the road; that's just a matter of seeing where the road is and knowing how a steering wheel works. the problem with driving is all the other drivers, and cyclists, and pedestrians, and deer - all the unpredictable, illogical, autonomous stuff out there with you.

driverless cars that share roads with people will require situational awareness and a pretty deep understanding of human behavior. that's much harder than staying between the lines and watching for red lights.

@NV --

You might have already seen this, but here's a link to an old Language Log post that points out that an aggregate bigram model (a simple model, in machine-learning terms) gives an estimate that the sentence "Colorless green ideas sleep furiously" is far more likely to occur in language than "Furiously sleep ideas green colorless"; these of course being the sentences that Chomsky used in his successful argument that semantics and syntax should be treated entirely distinctly, since the first sentence is grammatical but the second is not.
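For the curious, the flavor of that comparison fits in a few lines. Everything below is a toy: the training text is invented, and a real model would train on a huge corpus and (as in the post described above) aggregate over word classes rather than raw words. But even counts this crude can prefer the grammatical ordering:

```python
from collections import Counter
from math import log

# Invented mini-corpus; add-one (Laplace) smoothing so unseen bigrams
# get a small but nonzero probability instead of zero.
corpus = ("green ideas are new ideas . colorless gas rises . "
          "ideas sleep in books . dogs sleep furiously well .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)                      # vocabulary size

def logprob(sentence):
    """Sum of smoothed log P(w_i | w_{i-1}) over the sentence."""
    words = sentence.lower().split()
    return sum(log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
               for a, b in zip(words, words[1:]))
```

On this tiny corpus, `logprob("colorless green ideas sleep furiously")` comes out higher than the reversed word order, simply because three of its four bigrams were seen in training and none of the reversal's were.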

"The order in which these things came out looks obvious and pretty well inevitable. Now. But how far back does the inevitability date?"

Think about it ahead of time:

1) 'AutoCAD' requires that the computer store sets of coordinates (I'll say 4-D, to allow time) for points, to draw lines and shapes, many of which would be simply plugging in a template, such as circular cross-sections. Even back in the day, that's not that much data. No interaction with a real-time environment is needed.

2) A roomba involves moving around a far simpler and more constrained environment than a car would encounter.

3) A car means being able to deal with a highly complex, constantly changing environment, with few rules governing the players, and complexity includes things like light and traction.

"driverless cars that share roads with people will require situational awareness and a pretty deep understanding of human behavior. that's much harder than staying between the lines and watching for red lights."

Yes, the fastest route to driverless cars is to make ALL cars driverless or, at a minimum, make a driverless car environment available.

yep.

but a driverless car that can't leave its environment is basically a 5-person train car. and having a bunch of small vehicles all driving the same set of constrained routes is inefficient. you might as well just lay train tracks.

Car insurance is going to be very interesting with the driverless car.

What happens if a perfectly sober person driving a traditional car collides with a driverless car and its drunk passenger passed out in the back seat?

I think we're going to need some new laws, but as Brett and others will tell you, then the cops will be forced to kill a lot of people.

I think Heinlein's shortcoming, in general, was failure to anticipate how really difficult artificial intelligence and learning would turn out to be. Robots and automatic, adaptive machines of all kinds suffer. The Moon Is A Harsh Mistress was I think the worst of his sins.

I am not sure any skiffy writers of that time did, actually. Asimov put his thinking robots so far in the future that it's at least plausible, given what we know now, but not everyone did.

Clarke's HAL was I think quite a bit more complicated than we could have managed in 2001, in terms of parsing and autonomy.

I am assuming the first step is hybrid driverless cars with driverless interstates, maybe real HOV lanes that are driverless. Some controllable environments.

Although, while driverless cars may "cause" accidents, avoiding them, or, more likely, choosing between accidents is not as complex as one might think.

If all cars are driverless, then they can all signal their intentions to their near neighbors, who will then have more time to respond. Easier-peasier. Swarm control is harder than you'd think, but it's a lot easier when the "swarm" is limited to some small integer number of columns, changing columns as needed.

You'd definitely need roads that were altered to accommodate driverless. One of the hard parts of the driverless convoy, in the Middle East, is being able to tell where the road IS. So even though driverless convoys are following the leader (the lead vehicle has a human driver), staying in the lane can be difficult.

It's when driverless cars mix up with human drivers that things get dicey. All of one, or all of the other is the simplest way to go.

The estimates about chess playing computers were not that far off, given that the predictions were made around the time the first real computers were built. At the predicted time computers had at least reached the master class but not yet surpassed all human players. But I still think they overestimated AI and underestimated raw computing power, i.e. they thought it would be mainly the former and not the brute force approach that has (until now) dominated the field.
Poe was totally off there, though. He believed that any chess automaton would be unbeatable once it had grasped the rules. That was his one faulty argument for why the Chess Turk was a fraud (while every other argument in his famous paper on it turned out to be true).
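The brute-force core itself is tiny; all the hardness lives in move generation, evaluation, and the exponential tree. A minimal negamax sketch, run over a hand-built toy game tree (the tree and its leaf values are invented for illustration, not chess):

```python
# Plain negamax: the value of a position, for the side to move, is the
# best of the negated values of its successors. A chess engine wraps
# this core in move generation, a real evaluation function, and
# alpha-beta pruning; the search idea itself doesn't get any bigger.

def negamax(node, depth, children, value):
    """Value of `node` from the perspective of the side to move there.

    `children` maps a node to its successor nodes; `value` gives a
    static evaluation at leaves / at the search horizon.
    """
    kids = children.get(node, [])
    if depth == 0 or not kids:
        return value[node]
    return max(-negamax(k, depth - 1, children, value) for k in kids)
```

Feed it a dict-shaped tree with leaf values scored for whoever is to move at that leaf, and the minimax logic just falls out of the sign flips.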

we'll have FTL transporter technology before we have safe driverless cars that can go anywhere on-demand (which is the entire draw of the car).

The estimates about chess playing computers were not that far off given that the predictions were made around the time the first real computers were built. At the predicted time computers had at least reached the master class but not yet surpassed all human players.

I thought that Kasparov, after having lost to Deep Blue (?), should have invited the computer to share a sandwich with him.

@liberal japonicus
Thank you, sounds like a good idea. But I don't think I could get it together just now, being laid up with an infection that gives me lots of leisure to read stuff on the Net but rather less mental acuity. (I assume you have my e-mail address in the site records, though; send a note if you want to pursue anything.)

How pleasant: several people seem to have come up with the same idea I've had lately, that driverless cars could be useful in an environment that lacked those flaky hominids.

Where I differ with cleek is that driverless roads could be very useful, particularly in the hybrid form that Marty describes. I think they could be designed -- financing this is another matter -- to be very fast and fuel-efficient and safe, which could be appealing.

@Hartmut -
"The estimates about chess playing computers were not that far off given that the predictions were made around the time the first real computers were built."
Hmm, might depend on one's preferred value of "not that far."

There was a longish period, it seemed to me, when high-level computer chess seemed rather like controlled fusion: always 20 years or so in the future. That ended, of course, for one of those 2 fields.

No doubt, though, about the bad predictions of the effect of clever algorithms versus raw power. That lesson was becoming clear to many in the AI field by the late 60s, but it still took some time to get to the Deep Blue level and beyond.

At this point, controlled fusion is minus 62 years in the future, by my estimate. A lot of people just don't like the form it arrived in.

I suspect SF writers have used a lot of futuristic technology as plot devices without giving much thought to whether it could ever become a reality. Or when.

Chuchundra: "What great revolution is just over the horizon that we just don't see?"

Ignoring ray-guns and such, lasers and their impact were just about totally unforeseen. Now, the direct and indirect use of lasers makes up a large percentage of the world economy.

JakeB: "...solutions to the three-body problem..."

We now know that the three-body problem can't be solved in closed form, though limited, and perhaps useful, numerical predictions can be made.
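Those limited numerical predictions are exactly what the punch-card computer in that story would be grinding out: stepping the system forward in small increments. A bare-bones sketch, with toy units (G = 1, arbitrary masses) and semi-implicit Euler; real work would use an adaptive high-order integrator and track the accumulated error:

```python
# Step a planar three-body system forward: update each velocity from
# the current gravitational accelerations, then each position from the
# freshly updated velocity (semi-implicit Euler).
G = 1.0

def step(pos, vel, mass, dt):
    acc = []
    for i, (xi, yi) in enumerate(pos):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5     # |r|^3
            ax += G * mass[j] * dx / r3
            ay += G * mass[j] * dy / r3
        acc.append((ax, ay))
    vel = [(vx + ax * dt, vy + ay * dt)
           for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [(x + vx * dt, y + vy * dt)
           for (x, y), (vx, vy) in zip(pos, vel)]
    return pos, vel
```

Iterating `step` gives a usable short-horizon forecast; the catch is that errors accumulate and the system is chaotic, so the predictions are only ever local, never a solution.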

cleek: "...pretty deep understanding of human behavior."

Except for, perhaps, dealing with passengers, a driverless car may not have to understand anything about human behavior. It need only accurately sense the things in its environment that impact the effective and safe maneuvering of the car. It can just about totally ignore everything else. Driving seems so complex to us because we are aware of, and distracted by, so many things that have nothing to do with driving the vehicle. Also, a driverless car would have a very much smaller reaction envelope than a driver has. It can detect a change in its environment and execute the best reaction to it before a driver even becomes aware of the change. An advantage of driverless cars is that when some improvement is made in driving technique, all of them can know about it almost instantly. This would be a huge improvement over the current practice of training from scratch every single hormone-saturated driving unit.

Will driverless cars be programmed to obey speed limits?
If so, will libertarians ever buy them?

--TP

"There wasn't any speeding even though, ironically, Google's engineers have determined that speeding actually is safer than going the speed limit in some circumstances.
[...]
Google's driverless car is programmed to stay within the speed limit, mostly. Research shows that sticking to the speed limit when other cars are going much faster actually can be dangerous, Dolgov says, so its autonomous car can go up to 10 mph above the speed limit when traffic conditions warrant."

Do Self-Driving Cars Make Speed Limits Obsolete?

Google Testing Its Self-Driving Cars In A Complete Virtual "Matrix California": They must have taken the red pill.

"At this point, controlled fusion is minus 62 years in the future, by my estimate."

Yeah, sure, except, as you surely know, the "controlled" in that phrase is meant quite specifically to differentiate one that, once started, is subject to modification of its reaction rate and duration -- control, as one might say -- from a fusion reaction which, once started, is not subject to any further control till it's damn well done -- a sort of big bang reaction, if the phrase hadn't already been taken.

So, you control when the reaction happens by when you start it, and the size of the reaction by which fuel pellet you use. There are even fusion 'bombs' which can have their yield 'dialed', by injecting greater or lesser amounts of fuel into them. It isn't as though there weren't useful chemical processes in industry which are no more capable of being modulated once you start them.

It's still controllable enough that we could have had fusion powered spacecraft vastly superior to anything we have now, and fusion power plants, if the word "bomb" didn't make some people go all wonky.

"Will driverless cars be programmed to obey speed limits?
If so, will libertarians ever buy them?"

Would ANYBODY buy them? It isn't as though the majority of people obey speed limits. It's well known that speed limits are deliberately set below the safe, natural speed on a road, to enhance revenue. They set them with the intention that they be routinely violated, so they can fine people when they want the money.

Having self-driving cars that had to observe all speed limits would amount to a full time nation-wide rolling blockade. It would be so disruptive that... Hm, maybe a good idea after all, we might get the speed limits set rationally!

Porlock, if the spirit moves you, send a text file to the email under the kitty. I'd love to know more about the process of putting together that first AutoCAD program.

Brett, over here the speed limits are set the way they are because it has been observed that people drive a certain amount (10-11 km/h on average) above them. So, if the authorities think that 60 km/h is safe, they put a 50 km/h sign there. If they put a 60 sign there, people would drive 70. People do not usually ignore the signs (unless they seem stupid at face value) but read them more or less as: that's the speed you are expected to have (lower limit). The police are expected to stop only those that go more than 11 km/h faster and those that go too far below. So, if the sign says 60 and you drive only, say, 30 without a sound reason (and thereby disturb the flow of traffic), you'll get fined, and you'll get a speeding fine at 72.
In the big cities the police announces in advance where they will put the mobile speed traps.
In the past revenue creation played a role, but not anymore, or at least not for long (iirc there were some court decisions about it).

It need only accurately sense the things in its environment that impact the effective and safe maneuvering of the car.

which is, in itself, a monumental task: object identification and tracking, at high speeds, with ever-changing lines-of-sight.

but you can't just react your way through traffic. sometimes you have to take initiative - changing lanes in heavy traffic, for example. you can't always just wait for a hole to open up (the exit is coming up too fast), sometimes you have to nudge your way in, but maybe the driver to your right isn't interested in letting you in, so you have to try the guy behind him.

or, merging when a lane goes away - sometimes it takes communication with other drivers (hopefully polite!) to make it work. four way stops - the rules are simple but you often have to make eye contact with other drivers and figure out the order with hand signals and smiles. emergency situations.

yes, if all cars are driverless and they're on their own roads, a lot of that goes away. but we are definitely not going to build an entire secondary, door-to-door road system in the US just for driverless cars. there's no room, and we can barely maintain the one we have.

A good TED talk, mostly about neuroscience, but it does touch on why motions that seem simple are hard for robotics. Overall worth watching:

http://www.ted.com/talks/daniel_wolpert_the_real_reason_for_brains?language=en

Controlled falling off cliffs: a technology mastered long before parachutes. You can always choose when to jump. Damn that Da Vinci for rhetorically confusing the issue and claiming credit for what already existed.

Pathetic.

True, though, about building cars with automatic enforcement of existing speed limits and all. It would be a mess. It would show that a system built specifically for driverless cars would need to be designed to use their specific strengths.

Try to imagine the process of merging in traffic when all vehicles are competently and rationally driven by ego-free entities with full information about traffic conditions in all directions. It would be unrecognizable, scary as hell to look at with one's implicit assumption of human drivers, intolerable to people who *like* driving as a high-risk game and dominance display. Also extraordinarily fast and safe.

A nice technology. But cleek is no doubt right that building such a thing is wildly unlikely, or just impossible, for reasons outside the technology.

We'll get driverless cars that can handle interacting with humans. Then human driving will be discouraged, and the upcoming generation largely won't bother learning. And, THEN, with very few human drivers about, human driving on regular roads will be banned, and the car software will be switched over to the efficient "every car is like you" mode.

And a week later some Nigerian hacker will kill 10% of the US population by programming all the cars to reenact Maximum Overdrive. Which is my way of saying, the biggest obstacle to the vehicles gaining all that extraordinary efficiency by talking to each other, is the vulnerability to hacking that would result.

over here the speed limits are set the way they are because it has been observed that people drive a certain value (10-11 km/h on average) above it.

Here, speed limits are set based on what is a safe speed to drive regardless of (normal variation in) weather conditions. That is, if a curve is safe at 45 mph in dry conditions, but only at 30 mph in the rain, the limit is set at 30. Which is why people speed -- they know that the posted limit is well below the safe speed. And it saves the traffic police from having to argue about what the "maximum safe speed" under the prevailing weather conditions was. If you are over the posted speed, there's nothing to talk about. (But in normal conditions they simply don't bother to ticket higher, but safe, speeds.)

I look forward to the driverless getaway car.

Not to mention the riderless bicycle built for two.

Driverless motorcycles?

Could you program your driverless car to leave the scene of an accident?

What if the passenger in a driverless car is the headless horseman?

Well, then he would hardly be a horseman anymore now, would he, Count?

Unless he is a headless centaur.
It may be difficult to distinguish between a headless horse and an..eh..abbreviated centaur though.

"It kind of is what we see happening now, actually. I'm about 6y out of date with NLP, but when I was last in school that was the domain I was enmeshed in, and the areas with "hard" problems which saw very rapid progress in the last two decades generally were the ones who eschewed traditional rule-based AI approaches and just went with statistical approximations based on machine learning and very large data sets."

That's just a different brute-force approach that is as far from approximating how humans think as the earlier NLP approaches were.

I don't disagree, but it's a brute-force approach that works very well in an awful lot of circumstances. Beyond that, there's no specific reason why an NLP - or even AI - system should necessarily solve problems in the same manner a human would, not least because we don't necessarily have a good rationalist explanation of the precise process we undergo to solve the problems ourselves. Even if we did, though, there's no reason to assume the same process would work as well or better on entirely different "hardware".

I'm sympathetic to the rationalist position, and I think it's important for the understanding of our own cognitive processes, but it seems unjustifiably arbitrary to decree that "true" NLP or AI can only exist if it's aping ours precisely.

Though we'll need that sort of understanding to boost our own intelligence much. Which we really need to do, having created a civilization so dependent on higher-than-average intelligence just to maintain it. As a species we've about reached our Peter Principle limit.

JakeB: That Farmer story ("The Shadow of Space") was actually originally written as a rejected episode treatment for Star Trek! It's fun to imagine the alternate-universe version of Star Trek that would actually have run it.

...Lately, I've noticed a major rash of "why does the future suck"/"why did progress stop in the year [year]"/"where's my flying car?" stories. They usually blame it on something political they don't like.

A common feature of these articles is a tendency to assume that the futurism in mid-20th-century science fiction and magazine articles actually made sense, and the problem is with the world. I see no reason why people should assume this.

Old science fiction from that era usually imagined transportation technology vastly more capable than what we have today. I think this is just because, when it was written, transportation technology was in an exponentially advancing phase. But this doesn't happen with any technology forever, and in reality transportation hit a shoulder sometime around 1970, and progress since then has been more incremental (and largely driven by large advances in computer and communication tech). Since transportation is the paradigmatic sign of Progress in old SF, naturally people attuned to old SF interpret this as Progress having stalled.

The thing I notice is that old predictions about communication technology are remarkably often either right on target or too conservative. It's one area where reality really delivered. The "where's my flying car?" articles always have to have a bit about how the Internet and smartphones are stupid and make you stupid so they don't count.

The biggest hazard for science fiction writers (as the best ones freely acknowledge) is the tendency to assume the continuation of current trends. Which can be fine for "if this goes on" type stories. But leaves little space for truly disruptive innovations -- cf. http://www.businessweek.com/articles/2014-12-04/kitty-litters-invention-spawned-pet-cat-industry ;-)

"Old science fiction from that era usually imagined transportation technology vastly more capable than what we have today."

This is fair enough. Think about NERVA, Orion. Then look at Philae, dead because it ended up in shadow, when a simple RTG would have kept it running.

The old SF writers imagined a lot of things, but the one thing they never imagined, because it was too alien to their own personalities, was that we would have the tools to conquer the solar system, and suffer a failure of nerve about using them. They never imagined that human civilization would chicken out.

Too right, Brett.

See also this http://aeon.co/magazine/science/why-has-human-progress-ground-to-a-halt/ on the impact of our failure of nerve generally and its impact.

@Matt --

In that case, the third body in the story would have had to have been a red-shirted security officer, no doubt.

It's hard to complain about the lack of technological progress when you can carry in your pocket devices whose functionality used to weigh tons and fill heavily air-conditioned rooms thousands of square feet in size.

So.

To me, the overall biggest failure of prediction was that of failing to appreciate just how little technology was going to do along the lines of reducing human drudgery or making life overall more satisfying.

Gibson was closest, but his idea of how computer networks and human/machine interfaces would work turned out to be not very accurate. But they sure were fun to read about.

A bit late for this conversation, but there is an invention that no one seems to be predicting, and which has already somewhat started:

People constantly recording their own life.

And I don't just mean when things are interesting, I mean *all the time*. Sent immediately somewhere else. Probably encrypted with a key only they (And a trusted third party in case they're murdered) know.

This is going to make all sorts of changes in crime. No, not because of criminals recording themselves, presumably they'd be smart enough to turn it off...but how about almost everyone being able to produce an alibi, on demand?

You think 'Oh, people would fake them'...but CGI is more time-consuming than you think, and if the recordings are stored on third-party servers, people would have to fake them in advance so they could put them up in real time. So all anyone would have to do to prove an alibi is to go forward or backward far enough in the video until they find *someone else*, and that other person's video confirms they were actually there.

Seamlessly switching to a pre-rendered CGI and then back away, in real time, is not plausible right now.

And, of course, *victims* would have recorders running, also.

As would cars, filming the driver and recording the location. As would houses, recording who enters.

People only care about privacy when the information is out of their control. Give people the chance to record all this stuff *themselves* and put it somewhere they know it's secure, they will.

So within something like 30 years, crime is going to completely change. Completely and utterly.

AND NO ONE IS TALKING ABOUT IT.
