Ramifications of Self-Driving Cars?

AND a human will recognize a child standing still right next to a tree on the edge of the road. Or the ass end of a frozen-still deer that MIGHT decide to ruin your day. The human's appropriate response will be to slow down BEFORE it becomes an issue.

I used to drive a deer-infested road every day, and some friends of mine hit deer. The ones you see are never the ones that get you. Cars don't get distracted, and cars would probably react faster than people.
I don’t know if the sensors would be good enough to pick up a deer moving towards the road in a field, but eventually they will. When you make millions of the things, the price will come down and the capabilities will go up.

And the other point, of course, is one that’s already been made. It may be that humans will always be better than a computer at distinguishing between a kangaroo springing from behind a parked car and a child springing from behind a parked car in a time that makes it more possible for the human to take appropriate evasive action than it is for the computer. But this is a very narrow combination of circumstances which is not going to make a huge difference to the overall safety comparison between computer and human drivers. The fact that the computer never gets tired, or inattentive, or distracted, or intoxicated, or angry is going to be relevant in far more situations.

These kinds of judgements are made everywhere. If the speed limit were reduced to 10 mph, casualties would become a rare event. That won't happen for obvious reasons, but for every extra 10 mph, the casualty rate goes up in proportion. The balance between reasonable speed and acceptable casualties is one that every administration in the world strikes.

It is not a stretch to apply the same logic to SDCs. If there are still casualties, but fewer of them, then SDCs are A Good Thing.

I’m not sure where you see that, because in fact I don’t drive. I am, however, often a passenger on rural roads, which means I’m quite familiar with the potential obstacles you speak of. On rural roads, everyone in the car has to keep on the lookout for deer. Why? Because a single human is not adequate to the task of looking for them. That’s another argument for self-driving cars.

This only becomes tenable when honking is also automated, and is made “smart”.

I misinterpreted a previous comment you made about your skill level. Thank you for clearing it up.

My only purpose in bringing up the need to identify objects was to point out that a computer needs more testing than a human driver because there is more to teach it. It doesn't just need to be programmed to drive; it must also be programmed to know what the world looks like, and the way to know that it knows what the world looks like is to test it.

This is becoming much like the objections over nuclear power. Sure, NP is safer than all other forms of power generation but it’s not perfectly safe so I reject it.

Sure, there may be some rare edge cases where an SDC will make the wrong decision. Still, overall it will make much better decisions than people. Ergo, SDCs will be much safer.

OK, yes, the computer (or rather the programming) has to learn a wide range of things-in-the-world to be reacted to, including which things you may prefer to run over or hit rather than slam the brakes or swerve, and, if you have to swerve, what it is better to hit (oncoming truck: bad; guardrail/Jersey wall: acceptable). The good part is that computers learn fast; the part that makes it a lot of work is that they (so far) only act according to what has been taught to them. As mentioned earlier, the big challenge will be that for years they'll share the road with people-driven cars, whose drivers are likely to be frequently rear-ending the robocars because the latter insist on quaint notions such as fully stopping before a right-on-red.
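Just to make that "what is it better to hit" idea concrete, here's a toy sketch (purely hypothetical, not any vendor's actual code): a planner could rank recognized obstacle classes by relative harm cost and pick the evasive option with the lowest cost. The class names and numbers are invented for illustration.

```python
# Hypothetical sketch: encode a preference ordering over collision targets
# and choose the least-bad evasive maneuver. Costs are made up.

HARM_COST = {
    "pedestrian":     1000,
    "oncoming_truck":  500,
    "deer":            200,
    "parked_car":      100,
    "guardrail":        20,   # "acceptable" in the sense above: least-bad option
}

def pick_evasive_option(options):
    """options: list of (maneuver_name, obstacle_class_hit_or_None)."""
    def cost(option):
        _, obstacle = option
        # Unknown object classes get a middling cost rather than being ignored.
        return 0 if obstacle is None else HARM_COST.get(obstacle, 300)
    return min(options, key=cost)

# Example: braking still hits the deer, swerving left meets an oncoming truck,
# swerving right scrapes the guardrail.
choice = pick_evasive_option([
    ("brake_straight", "deer"),
    ("swerve_left", "oncoming_truck"),
    ("swerve_right", "guardrail"),
])
print(choice)  # ('swerve_right', 'guardrail')
```

The point being: every entry in that table is something a human either teaches the system or lets it learn, which is exactly why it's a lot of work.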

Although I can see myself wanting my personally owned vehicle to be a selective-control one, where I can choose whether I want the robodriver or to take over myself for fun (as long as the system does not surrender to me without warning; if it encounters something incomprehensible I want it to pull over to the side of the road before telling me: OK, ***you*** figure this out, monkeyman. I'll probably be just as baffled anyway and will just wait for the problem to resolve or tell it "OK, let's go back and plot a different route"), I can see myself welcoming the robocar at Avis/Dollar when travelling, so I know I can never run afoul of local traffic laws, speed traps and e-tolls.

WASHINGTON (AP) — The first U.S. self-driving car fatality took place in May when the driver of a Tesla S sports car using the vehicle’s “autopilot” automated driving system died in a collision with a truck in Florida, federal officials said Thursday.

Give me a sober driver any day.

What, are you saying the computer was drunk?

At least in CA, autopilot or human, that truck would be at fault. I nearly had something similar happen a few weeks ago with a tractor trailer making a left turn across a section of rural Highway 41 between Fresno and Lemoore. I stopped about 10' short of hitting him. The truck was making a left from southbound 41 to eastbound Floral Ave. I was northbound on 41.

These are scenarios where a human-driven car, by turning right in front of it, can create a situation that a self-driving car cannot easily compensate for. A self-driving car reacts faster than a human, but a human can still do things that it cannot react to in time.

This is very sad, but I'm really interested to know what happened here. From what I've read, the truck turned across the Tesla's path; that is, the truck failed to yield to oncoming traffic. Depending on the time the car had to respond, it's possible it did the best it could under the circumstances. It's also not fair to call this a self-driving car. Tesla assumes that drivers are always paying attention to prevent accidents like this.

I'm also wondering if the fact that it was a trailer contributed to the crash. I seem to recall that Tesla's proximity sensors aren't good at detecting things like trailers that sit above the sensors. I recall hearing about the proximity sensors failing to detect a trailer when another Tesla was self-parking; that car drove under the trailer. In this case, I wonder if the lighting conditions meant the cameras couldn't detect the trailer and the proximity sensors missed it at the last second because it was too high. It will be interesting to learn if, when and how hard the car braked after the truck turned in front of it.

This is just the first “ummmmm… Didn’t think of that…”

The trailer was white in bright sunlight.

The trailer was also an obstacle which existed only ABOVE the surface. The forward-looking sensors did not “see” it, according to what I’ve seen. The driver (that is still the human, folks) also did not see it.
The design, if it assumes that anything it hits will be in contact with the ground, would not have noticed a large box supported at the ends - which were something on the order of 50’ apart. IOW: a semi trailer.
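To illustrate that failure mode, here's a minimal sketch, assuming (hypothetically) an obstacle filter that treats anything whose underside is well above the road as overhead clutter such as a sign or bridge. The field names and the clearance threshold are invented for the example, not taken from Tesla or anyone else.

```python
# Hypothetical illustration of a ground-contact assumption in obstacle filtering.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float
    bottom_height_m: float   # height of the object's lowest visible edge above the road

OVERHEAD_CLEARANCE_M = 1.2   # anything whose underside is above this is treated as a sign/bridge

def is_braking_obstacle(d: Detection) -> bool:
    # The flawed assumption: only objects near the road surface count.
    return d.bottom_height_m < OVERHEAD_CLEARANCE_M

# A box trailer supported only at its ends rides high between the axles.
trailer = Detection(distance_m=30.0, bottom_height_m=1.4)
print(is_braking_obstacle(trailer))  # False -> the filter would wave the car straight under it
```

A box that starts 4+ feet off the ground with nothing underneath it for ~50' simply doesn't look like an obstacle to logic built that way.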

In the early days of self-driving cars there were bound to be some fatalities*, but that doesn’t make it any less of a tragedy.

It seems my opinion on onboard data has already been vindicated. Tesla have already stated the reason for the crash and accepted culpability (if not legal liability… it was a "public beta"). And that's because these systems collect so much data that you basically have a very rich black box, which should make it easy most of the time to figure out what happened.
This should mean that systems will become much more robust very fast.
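For what "a very rich black box" means in practice, here's a minimal sketch of the idea: keep a rolling buffer of timestamped sensor and control data and dump the last N seconds after an incident. The field names are illustrative only, not a description of Tesla's actual logging.

```python
# Minimal sketch of a rolling "black box" recorder for post-incident analysis.
import json
import time
from collections import deque

class BlackBox:
    def __init__(self, seconds_to_keep=30, hz=10):
        # Fixed-size buffer: old frames fall off as new ones arrive.
        self.buffer = deque(maxlen=seconds_to_keep * hz)

    def record(self, speed_mps, steering_deg, brake_pct, detections):
        self.buffer.append({
            "t": time.time(),
            "speed_mps": speed_mps,
            "steering_deg": steering_deg,
            "brake_pct": brake_pct,
            "detections": detections,   # e.g. classified objects with ranges
        })

    def dump(self, path):
        # Written out after a crash (or on demand) for reconstruction.
        with open(path, "w") as f:
            json.dump(list(self.buffer), f, indent=2)

box = BlackBox()
box.record(29.0, -0.5, 0.0, [{"class": "truck", "range_m": 42.0}])
box.dump("incident_log.json")
```

With that kind of record, "why did it brake when it did (or not)" stops being guesswork, which is why fixes should propagate quickly.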

  • I am not of course saying that all testing should be trial-and-error in the field, with real human lives. You want exhaustive testing before you let such systems on the road, or in the care of a member of the public, but even with such precautions, there will be some situations no-one anticipated.

However, the AI could be at least partially at fault if it had sat in the truck's blind spot for some time.
I don't linger long in a position where I can't be seen by the driver, and I hope, and assume, the AI includes some awareness of that.
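If an AI does include that awareness, it could look something like this toy sketch: track how long the car has been alongside a large vehicle in a zone its driver likely can't see, and change speed to clear it. The zone geometry and timing are made-up illustrative values, not anything from a real system.

```python
# Hypothetical "don't linger in the blind spot" rule.

def in_blind_spot(longitudinal_offset_m: float, lateral_offset_m: float) -> bool:
    # Roughly: beside the trailer, behind the cab mirrors, about one lane over.
    return -15.0 < longitudinal_offset_m < -2.0 and abs(lateral_offset_m) < 4.0

def adjust_speed(current_speed_mps: float, time_in_blind_spot_s: float,
                 max_linger_s: float = 3.0) -> float:
    if time_in_blind_spot_s > max_linger_s:
        # Either drop back or pass; here we simply slow to fall behind the trailer.
        return current_speed_mps - 2.0
    return current_speed_mps

# Ego car position relative to the truck cab (meters): alongside the trailer.
ego_long, ego_lat = -8.0, 3.0
if in_blind_spot(ego_long, ego_lat):
    print(adjust_speed(27.0, time_in_blind_spot_s=4.5))  # 25.0 -> drop back out of the blind spot
```

Whether any production system actually does this, I don't know; it's the kind of defensive-driving rule I'd hope gets taught.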

It sounds to me like the truck is at fault for crossing into the car's lane. Had this not been a car in self-driving mode, would the truck driver just have been cited? Is the truck driver being cited? What was the driver of the car doing? The driver was a former Navy SEAL, so you would assume that he had pretty good reflexes. This "accident" needs to be vetted a bit more before I would start blaming the technology alone and not the humans involved. Still, I can understand Tesla trying to get out in front of this by accepting some responsibility, because obviously the technology didn't avoid the accident. Still doesn't mean they are totally at fault.

The point is, regardless of how many programmers program every conceivable possibility into a program, there's always going to be one that they missed. We need to send a self-driving Tesla to Kim Jong Il. Let him test it for us.

I don’t think he’s up to the task, being dead and all. :wink:

No, we need to acknowledge that the world isn't a perfect place and that humans and machines make mistakes and sometimes pay for them with their lives. The human drivers in both vehicles bear quite a bit of responsibility here, along with Tesla. I'm not certain what your comments mean in the realm of a factual answer to a General Questions thread.