Self-driving cars

Interesting. However, I think that unless we ban drivers, we’ll still need stop signs. Some of them are used to give priority to traffic on the main street, and having all cars take their turns in this situation might be bad for traffic flow. I don’t think you could expect every car to have up-to-date information on this - it is easier to provide visual cues. Ditto for traffic lights, which are often timed globally.

There is another big market for these cars. My father-in-law gave up his car about five years ago, when he was well over 90, because he didn’t trust himself not to get confused at the wrong moment. Though he lives in a residence, it is often a pain when he needs to go someplace. There are taxi services, but they are out in the sticks a bit - though they do have cabs. A self-driving car would be a big help to him.
Proto-geezers like me will have plenty of money for them in 20 years or so, and I fully expect to be driving around in one when I hit 90. It will certainly cut down on the number of people who mistake the accelerator for the brake and plow into stores. People like me will be a big market.

I’d imagine there’d be a herd immunity effect. If you’re the only self-driving car on the road, then you’re still dead when a drunk jumps the center line and smashes into you with no warning. But if he had a self-driving car, there’s no problem. All the way to the point where being the only human driver on the road could be kinda fun, since you could swerve all over the place and be a total jerk without fear of being rear-ended.

Last I checked, self-driving cars suck in bad weather or anything that limits visibility for their sensors, especially snow. That’s why they stick to California.

True. The cars have been in accidents, but it has always been the fault of the human driving the other car.

It’s been raining recently (at last!) so I’ll keep my eyes out for them. But my understanding is that they still have work to do in bad conditions. On the other hand, speaking as a veteran of the snow belt, people have some work to do also.

“I’ve just picked up a fault in the AE-35 unit. It is going to go 100 percent failure within 72 hours.”

Before I saw this comment, I was thinking:
This is going to be just like those arguments against artificial intelligence. Every time an advance is made, someone replies that it isn’t “real AI”, because when you look inside the box it’s just sophisticated pattern matching and stuff. What the heck do they think real intelligence is in comparison?

Likewise, we already have self-driving cars in many ways. Traction control, self-braking, automatic parking, adaptive cruise control, lane following, and so on are all small steps along the way to comprehensive self-driving. And at each step someone will come along and say that it isn’t real self-driving, because it can’t handle some condition that a human probably couldn’t handle any better anyway.

The fact that you brought up chess just shows how ridiculous the argument is. The fact that chess programs are just sophisticated pattern matchers is utterly irrelevant to the fact that they beat all humans, no matter how good they are. And the gap is only widening: the chess programs get better every year, and the humans mostly don’t.

The same will be true of self-driving cars. It doesn’t matter that the computer doesn’t have “judgement”, because (as with chess) it will be so far superior in the basics that it won’t need judgement (never mind that humans can’t even exercise judgement in a 250 millisecond timespan). It won’t have to decide whether to swerve left or right to avoid that car because it won’t have gotten into a potential rear-ending situation in the first place, and even if it did it would do far better than a human, applying the brakes instantly and at full power (compared to a human that might wait a half second and then not even brake fully).
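To put rough numbers on that: here’s a back-of-the-envelope sketch where the speed and braking rates are assumptions of mine, and only the half-second human delay echoes the figure above.

```python
# Back-of-the-envelope stopping distances: computer vs. human driver.
# Assumed numbers (not from the thread): 30 m/s (~67 mph), full braking
# at 8 m/s^2, a human who waits half a second and then brakes at only 6 m/s^2.

def stopping_distance(speed_mps, reaction_s, decel_mps2):
    """Distance covered during the reaction delay plus the braking phase."""
    reaction_distance = speed_mps * reaction_s
    braking_distance = speed_mps ** 2 / (2 * decel_mps2)
    return reaction_distance + braking_distance

speed = 30.0  # m/s, roughly highway speed

computer = stopping_distance(speed, reaction_s=0.05, decel_mps2=8.0)
human = stopping_distance(speed, reaction_s=0.50, decel_mps2=6.0)

print(f"computer: {computer:.0f} m, human: {human:.0f} m, "
      f"difference: {human - computer:.0f} m")
# Roughly 58 m vs. 90 m: about 30 m (several car lengths) of extra
# stopping distance from the delay and softer braking alone.
```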

Yeah, I think a reasonable starting point would be limited and exclusive, and probably urban – something like commuter lanes are now, or perhaps an “el” type situation.

My grandfather was a traveling salesman for years. The happiest part of retirement was being able to stop driving. If he’d ever had the chance to buy a self-driving car, he would have been the second person in the country to do so… (the first one would have been pictured in the newspapers, and he didn’t like being pictured).

Exactly.

Of course, this could be accomplished with trains. But, there are areas of the US where the people are infatuated with private box cars. It is not difficult to imagine this scenario.

What about rubber bullets? Tasers?

50 years from now people are going to look back at our times, note the 33k annual death rate, and compare it with trans-Atlantic crossings of the 18th century. “Grandpa, how did you ever get in a car and drive? It was so dangerous”.

No one is going to build any new roads for these cars, and our commute lanes are getting to be carpool or pay. And are crowded. On my way home I typically go a lot faster than the carpool lanes.

But are you afraid that driverless cars are going to hit someone or be hit by someone? The latter already happens, and isn’t a big problem. The former will no doubt happen also, but the cars won’t be sold generally until there is a lot more experience with them than there is now, and not before their safety record is a lot better than cars with drivers. At that point not letting them on the road will be effectively killing people.

Mapping will not be the issue - how quickly will roads be mapped to 100% accuracy once cars start communicating with a central server? And I absolutely believe that will be the case in any large-scale autonomous driving environment.

Even for the person who lives at the end of a dusty, 3-mile private drive - how quickly will that driveway be mapped once he buys a self-driving car? Three or four trips, with the average values taken, would do it pretty much perfectly. How many roads are there in the US that would not be traversed at least once a day?
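As a minimal sketch of that average-a-few-trips idea, assuming each trip happens to yield GPS fixes at the same sample points (a real mapper would also have to align the traces and reject outliers, and the coordinates below are made up):

```python
# Toy illustration of refining a private-driveway map by averaging repeated
# GPS traces. Assumes each trip produces fixes at the same sample points;
# a real mapper would also align the traces and throw out bad fixes.

def average_traces(traces):
    """traces: list of trips, each a list of (lat, lon) fixes."""
    n_points = len(traces[0])
    averaged = []
    for i in range(n_points):
        lat = sum(trip[i][0] for trip in traces) / len(traces)
        lon = sum(trip[i][1] for trip in traces) / len(traces)
        averaged.append((lat, lon))
    return averaged

# Three noisy trips down the same (made-up) driveway:
trips = [
    [(44.000100, -93.000200), (44.000500, -93.000700)],
    [(44.000120, -93.000180), (44.000480, -93.000720)],
    [(43.999980, -93.000220), (44.000520, -93.000680)],
]
print(average_traces(trips))
# Independent GPS noise shrinks roughly with the square root of the number
# of trips, which is why three or four passes already get you most of the way.
```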

The real problem will be the truly random events, the ones that even human drivers don’t handle well and the ones that an autonomous car will be expected to handle perfectly - the kid sprinting out of a driveway, the human driver changing lanes without warning, the sudden pothole throwing the car off its line…the deer running across the road.

Once the technology can handle these demonstrably better than a human, you will be golden.

I don’t think the US will lead in this - I think it will be somewhere like Beijing or Bangkok, where you will have two considerations present:
a) Traffic snarls making human drivers totally insane and wasting huge amounts of time (they can do something more productive in autonomous cars)
b) A very large metropolitan area that people very seldom leave - so you will have (for example) one car for daily commuting, a small city car - and a second car for weekends.

This is the reality of the dopey situation in the link. If the car can turn left into oncoming traffic, AND turn right off the cliff, it will do neither. It will turn left just enough and right just enough to remain right in the middle of its lane, making adjustments by the millisecond.

The fact that the engineered situation doesn’t actually force the computer to choose what kind of accident to have illustrates the essential point. Computers are going to be so much better at driving than humans that it is actually very difficult to engineer a situation in which the computer is forced to make a killing decision.
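That millisecond-by-millisecond lane centering really can be as dumb as a little feedback loop. Here is a toy sketch; the gains, time step, and starting offset are all made-up numbers, not anything from a real car:

```python
# Toy lane-keeping loop: steer proportionally to the offset from lane
# center, updated every 10 ms. The gains, time step, and starting offset
# are invented for illustration, not taken from any real vehicle.

lane_center = 0.0        # meters; 0 = middle of the lane
position = 0.6           # car starts 0.6 m right of center
lateral_speed = 0.0      # m/s sideways
gain_p = 4.0             # proportional correction toward the center
gain_d = 3.0             # damping so the car doesn't oscillate
dt = 0.01                # 10 ms control loop

for _ in range(300):     # simulate 3 seconds
    error = lane_center - position
    lateral_accel = gain_p * error - gain_d * lateral_speed
    lateral_speed += lateral_accel * dt
    position += lateral_speed * dt

print(f"offset after 3 s: {position:.3f} m")  # essentially back on center
```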

“I say probably because we [del]could[/del] have [del]knucklehead[/del] **corrupt** legislators pass laws that won’t allow companies to lower insurance rates for self-driving cars or make them illegal.”

Fixed your answer for ya!

What about them? I don’t get it.

Don’t they protect others at the expense of officers? “Protect” others at least to some extent? Yet we use them, right?

First off, this username/post combination is very disorienting.

That said, I would expect truly autonomous cars to have lots of redundancy, so that faulty systems can be bypassed and also flagged for R&R.

I think part of the theory is that at some point, self-driving cars are expected to be able to communicate with other self-driving cars nearby to exchange information and make collaborative decisions. 20 cars approaching an intersection would co-ordinate with each other, adjust their speeds, and pass through, around each other, with minimal delay. This means they would also have to be able to identify any non-networked traffic (peds, bicycles, piloted cars, cattle) and adapt the overall networked traffic flow to account for these nuisances (and, unlike human drivers, not get pissed off by them).

In the decision examples, self-driving oncoming traffic will be warned that a panic-like maneuver will be crossing their lane of travel, so that braking or evasive action can be taken. Insurance companies will all but force drivers to at least install warning devices, if not responsive override systems, that will listen to automated car chatter. And black boxes, to ensure that the systems do not get turned off.
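For flavor, here is a toy sketch of the intersection coordination described above, written as a single first-come, first-served reservation manager. Real proposals are peer-to-peer and far more elaborate; every name and number in this is invented:

```python
# Toy first-come, first-served intersection reservation scheme. Each
# networked car requests a time slot; non-networked traffic (peds, bikes,
# piloted cars) simply blocks the intersection until it clears. Every
# name and number here is invented for illustration.

SLOT_SECONDS = 2.0

class IntersectionManager:
    def __init__(self):
        self.reserved_until = 0.0   # time at which the box is next free

    def request_slot(self, car_id, arrival_time):
        """Grant the earliest free slot at or after the car's arrival."""
        start = max(arrival_time, self.reserved_until)
        self.reserved_until = start + SLOT_SECONDS
        return start

    def report_obstruction(self, clear_time):
        """A pedestrian, cyclist, or driven car holds the box until clear_time."""
        self.reserved_until = max(self.reserved_until, clear_time)

manager = IntersectionManager()
for car, eta in [("car-1", 10.0), ("car-2", 10.5), ("car-3", 11.0)]:
    print(car, "enters at t =", manager.request_slot(car, eta))

manager.report_obstruction(clear_time=20.0)   # a cyclist rolls through
print("car-4 enters at t =", manager.request_slot("car-4", 16.0))
```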

Or, the industrial economy will collapse and we will end up relying on self-driving horses.

“Mein Führer, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.” :slight_smile:

Indeed. Also, communication back to “base”. Teslas already have this to some extent; telemetry readings are sent to their service department and many problems can be diagnosed remotely.

I think this is a really cool concept (though perhaps a bit scary as a passenger) but ultimately not necessary for useful self-driving cars.

Consider the case of a line of cars on the freeway where the first one stops hard for some reason. What frequently happens in real life is that the first car is fine, as is the second, but for each subsequent car the human response time eats into the margin, and once you get a few cars back there’s a rear-ending. There’s absolutely no reason for this since there’s plenty of room if all cars could brake at the same time, or at least within a few milliseconds of each other. That’s entirely possible even without car-to-car communication; just hitting the brakes as soon as the brake lights for the car ahead come on (and using radar to compute braking strength) would be effective.

There are plenty of similar scenarios that could be prevented just by having a little bit faster reaction time. And in fact, just having a few self-driving cars among the others would help everybody; that extra reaction time has a cascade effect on all nearby vehicles.
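One rough way to put numbers on how much margin reaction time eats in that chain-braking scenario, assuming both cars brake equally hard to a complete stop (the speed, gap, and delay figures are invented placeholders):

```python
# How much of the following gap gets eaten by reaction delay when the car
# ahead brakes hard and both cars then decelerate equally to a stop. With
# equal braking, the follower closes on the leader by exactly
# speed * reaction_delay. All figures below are invented placeholders.

SPEED = 30.0   # m/s, roughly highway speed
GAP = 20.0     # meters of following distance

def margin_left(reaction_s):
    """Gap remaining once both cars have stopped."""
    return GAP - SPEED * reaction_s

for label, delay in [("automated, brakes on leader's brake lights", 0.05),
                     ("attentive human", 0.75),
                     ("distracted human", 1.50)]:
    margin = margin_left(delay)
    verdict = "ok" if margin > 0 else "rear-ends the car ahead"
    print(f"{label:45s} margin: {margin:6.1f} m  -> {verdict}")
```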

FYI, Agel’s book on the making of 2001 has a picture of the briefing room on Clavius base with Peter Sellers as the good doctor cackling in the foreground.

I don’t think that self-driving cars are going to be a panacea against accidents. Without a doubt the first generations will have to be better than the drunk or the inattentive driver, and probably better than the average driver in most situations. The current level of assist you get in cars is pretty good at keeping the car under control while you’re doing the navigating. But when it gets to the point where the car is doing the navigating, they’ll almost certainly create new opportunities for problems.

Even the current assist tools took a while to get to their current state. About 15 years ago my father-in-law rented a luxury RWD car for a trip, and ended up driving back in a bad ice storm. The car had an early form of traction control, so they were safe. However, they were stuck several times on small bumps in the ice because the car would refuse to spin the tires and climb over the bump with the traction control on (it would cut the throttle, not just apply the brakes to the spinning tire), and there was no way to disable it. Newer systems don’t behave like that one did, probably because it was useful only in limited situations. Production traction control systems were around 20 years old at that time.

That problem arose just from removing the ability to do something; I can imagine that the first few self-driving cars will have an unexpected issue or two (like maybe a temporary, unmapped construction detour that doesn’t look like what the computer identifies as a “road”). Lord knows the early experimental versions of self-driving cars were outright hilarious, to the point of making it seem that teenagers on acid might actually do better. To expect the first commercially available versions of driverless software to have NASA-like reliability seems unreasonable to me. OTOH, it still just has to be better than the average person in 99% of the situations, and it will be doing great.

As far as cars that are communicating with each other while going down the road - I’ll probably start trying to figure out how to make my 45-mile commute by bicycle if that becomes commonplace. Networked computers are by definition receiving inputs from the other computers in the network. That means that eventually, through either malfunction, incompatibility, or maliciousness*, a computer is going to receive crazy data that makes it do odd things. I’d prefer not to be hurtling down the freeway in a pack of those when things go wrong. A pack of monkeys hurtling down the freeway is bad enough; a set of their recipes talking to each other and doing it for them is going to fail spectacularly eventually.

*<Bored Teenager>Hey, wanna go down to the overpass and DoS all the cars going down the freeway?</Bored Teenager>