Ramifications of Self-Driving Cars?

This is my big issue (which I mentioned earlier in this thread, using the example of parking on an unmarked dirt field at a rural music festival).

Google proudly publishes its safety record, with zero incidents. But they don’t publish the number of incidents where the car failed to perform its most basic job: taking you where you want to go.

I have read that the car interprets a puddle of water as a solid object, and therefore stops in the middle of the road, thinking the road is blocked. Similarly with a narrow strip of mud or snow a few inches high: the car cannot yet understand that it is okay to drive right over it. These problems are solvable, of course…but for the first few years on the road, it’s going to require installing a LOT of updates and “service packs”.
Just think of how much of a hassle it is to change from Windows 7 to 8 to 10, etc. It slows you down, but at least you still get your work done. But when, one rainy morning, your car completely fails to take you to work at all, you’re going to be very unhappy.
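To make the failure mode concrete, here’s a toy sketch (mine, not anything from Google’s actual pipeline) of how a naive obstacle check could be fooled by a reflective puddle. All names and thresholds are made up:

```python
# Toy illustration (not Google's pipeline): a naive height-threshold obstacle
# check over lidar returns. A mirror-like puddle can reflect the sky or
# roadside objects, producing phantom returns above the road plane.
from dataclasses import dataclass

@dataclass
class LidarPoint:
    x: float  # metres ahead of the car
    y: float  # metres left/right of the lane centre
    z: float  # metres above the estimated road plane

OBSTACLE_HEIGHT_M = 0.10  # anything taller than ~10 cm counts as "solid"

def blocks_path(points, lane_half_width=1.5):
    """True if any return inside our lane pokes above the height threshold."""
    return any(abs(p.y) <= lane_half_width and p.z > OBSTACLE_HEIGHT_M
               for p in points)

phantom = [LidarPoint(x=12.0, y=0.3, z=0.4)]  # reflection artifact, not solid
print(blocks_path(phantom))  # True: the naive check stops the car for a puddle
```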

Imagine sitting in your stranded car, frantically calling the help line, and hearing “your call is important to us” repeated for 45 minutes, while a thousand other cars on the highway honk at you angrily.

It’s going to be a long, long road, and take many decades.
With millions of bug fixes along the way.

Yes. My point is that testing of software is important. You seem to agree.

That misses the point. Chronos wanted to know why we shouldn’t be happy with a car that can just pass a DMV test, seeing as we are happy with people who can pass a DMV test.

The answer is that computers and people are different. A computer could be programmed specifically to pass a fixed set of driving tasks but not be able to do any other driving at all. A person is not like this. We know that if a person can do certain tasks then they can probably do certain other related tasks. We don’t know this about computers; all we know is that they can do the tasks we’ve programmed and tested them for. Therefore a basic driving test would not be adequate, because we would not be sure that the computer has the skills necessary to drive in other conditions.

Now of course, part of programming a computer involves testing it. The Google cars are being tested now. The testing is already far more rigorous than a DMV test. It would be idiotic to program a self-driving car and then, without doing any testing at all, put it through a basic DMV test and then declare it safe for the public.

But a person is like that: A person could be trained specifically to perform the tasks in a driving test, too. And I suspect many are. They could also be trained for more general tasks, but then, the same is also true for programming a computer.

It would be just as idiotic to train a human driver, and then, without any testing at all, put them through a DMV test and declare them safe.

But of course, any test whose content is known in advance is a test you can train to; I did that myself when I needed to get a Florida license (what? I could drive, and in fact had driven in Florida for three years without damaging anything or anybody, but I didn’t know what a 3-point turn was). Ideally, training will also involve the other stuff outside the test.

I’d have to know a lot more about this incident. A “puddle of water” could be “many inches of standing water” which is quite dangerous to drive through and could disguise a deep hole, or could be a light sheen of water on the street.

There were a couple of early bug stories I heard along similar lines, related to how the laser rangefinders and cameras dealt with highly reflective surfaces, and to cases where the cameras picked up reflections of traffic signs. This has since been compensated for.

Once insurance companies are confident that SDCs are at least average, they will offer policies that are much cheaper. I can imagine that bank loans will be cheaper as well (since it’s less likely that one will total the car). If one has kids, then they’d rather have an SDC drive them around than let them drive themselves.

Cheaper AND safer. It won’t take long at all once they are offered.

Self-driving cars must pass crash tests also. You wrote as if software testing for these cars would be some kind of new thing. Perhaps I misunderstood.

As the VW issue shows, gaming the test is always a concern. The high accident rate for young drivers might indicate that the DMV test is as inadequate for humans as it would be for a computer.
I read Chronos as saying that current regulations are inadequate for humans also. DMV tests are given in a variety of orders in a variety of locations and situations. Being able to pass all of them (and, say, a few others like merging onto a highway) might be good enough for permitting cars to drive, but not good enough for the developers of the car, who are trying to catch design defects, not individual driving flaws.
If you are buying a $100K piece of software, you do an acceptance test on it. This test is nowhere near as thorough as the testing developers do before it gets released. (I’ve been on both sides of this.) So, while a DMV test (or something like it) might be good enough to let a new car on the road, no car maker is going to give the car such a test and call it done.
I suspect DMVs would love to give more extensive road tests to new drivers, but cost issues forbid it. If a high school had a driver’s ed class, and testing one graduate of that class could show that all of them were qualified to drive, the test could be made far more rigorous. That is the situation with self-driving cars. Actually, each new car, unlike each new driver, won’t be DMV tested at all, I bet.

Whether the current tests are adequate or not is certainly debatable, but I will not strongly take either side of that debate. I’m just saying that, whatever the standard is, it should apply equally to chromeheads and to meatbags. If the current tests aren’t strict enough for computers, then they’re not strict enough for humans, either. If they are good enough for humans, then they’re also good enough for computers.

No, because humans are already programmed for a heap of things that computers aren’t. A human knows what a tree, bus, car, dirt road, child, building, forest, etc. looks like. A human has a model of the world in their head. A human can take a previous experience and successfully apply it to a new situation. A computer has none of this unless it has been specifically programmed for it. We can take an average 18-year-old person and make a range of accurate assumptions about how they will behave. A computer has to have all of this knowledge programmed and then tested.

Teach a human to start, stop, reverse, turn left, and turn right, and they have the basics to be able to drive in the world. Teach a computer these things and nothing else and it is still useless. We need to teach it what a dog looks like and how to react, what water looks like and how to react, etc. I could go on, obviously.

So a computer that is going to drive a car needs to be tested far more thoroughly than a person who is going to drive a car, because not only do you need to test its ability to control the car you need to test its ability to interpret the world.

Obviously this kind of testing is not done once the completed car is presented to the public; it is being done now, in the design phase. But it still needs to be done, and to pretend that a DMV test that is adequate for a human is adequate for a computer is just plain laughable. A human who can successfully pass the DMV test could be expected to drive on a wet dirt road through a forest. A computer may not have been programmed to know what a dirt road is, and may not recognise it as something that can be driven on at all.

Actually it is a bit simpler than that.

There are objects. Based on their position and how fast they are moving, the objects in question can be avoided, or they can be allowed to interrupt movement. If a pedestrian steps off a curb 150’ ahead, the car stops and allows them to pass. It does not need to know if it’s a dog, a little kid pulling a wagon, or an adult; it is an obstacle in its path with a course and speed, and rule #1 is not to hit things.
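A minimal sketch of that rule, assuming the car already has fused position and closing-speed estimates for each tracked object (everything here is illustrative, not any real system’s code):

```python
# Minimal sketch of the "don't hit things" rule: every tracked object is just
# a position plus a closing speed (fused from lidar/radar/camera); brake if we
# would reach it too soon. Names and thresholds here are made up.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float    # current gap along our path, in metres
    closing_mps: float   # rate at which that gap is shrinking, in m/s

BRAKE_TTC_S = 4.0  # brake if we would reach the object within 4 seconds

def should_brake(obj: TrackedObject) -> bool:
    """Rule #1: don't hit things. What the object *is* doesn't matter here."""
    if obj.closing_mps <= 0:
        return False  # the gap is opening; no conflict
    return obj.distance_m / obj.closing_mps < BRAKE_TTC_S

# Pedestrian steps off a curb 150' (~46 m) ahead while we close at 13 m/s:
print(should_brake(TrackedObject(distance_m=46.0, closing_mps=13.0)))  # True
```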

The cars are not just using cameras to identify things in their path. It’s a combination of laser rangefinding, radar, and cameras.

I understand that there can be shortcuts taken in the way the world is “modelled”; however, I thought Google were having to come to grips with moral issues such as a pedestrian stepping out in front of the car. The car can avoid the pedestrian and hit something else, or it can hit the pedestrian.

Also, I would hope its behaviour when faced with a person walking out in front of it would be different to what it would be if a kangaroo hopped out in front of it or a tree branch fell across the road.

http://cacm.acm.org/magazines/2015/8/189836-the-moral-challenges-of-driverless-cars/fulltext

Cars and driving are designed around the capabilities we have acquired through evolution, so there is no need to test for those, with the exception of the vision test.
On the other hand there is wide variation among potential drivers, and so you need to screen out those outliers who have never quite gotten driving, or who haven’t really learned how.
Computers and software are designed from scratch and have to fit into the pre-existing driving environment, so they need extensive verification tests. On the other hand there should be little variance between instances of cars, so there is no need to test each one, assuming there are tests for defective manufacturing.
As an example, both an operating system and a microprocessor need to have the design thoroughly tested to make sure they meet the specs. Because there is manufacturing variance for processors, each one also needs to be tested for defects. Because reproducing software is almost defect-free (and a checksum can detect pretty much all defects), you don’t need to test each CD or download of software for errors.
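The software half of that point is easy to illustrate: verify each copy against a published digest instead of re-testing it. A minimal sketch, with the filename and expected digest left as placeholders:

```python
# Verifying a copy of the software against a published SHA-256 digest: any
# reproduction defect changes the digest, so copies needn't be tested further.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage; the filename and digest are placeholders:
# expected = "<digest published by the vendor>"
# assert sha256_of("car_firmware.bin") == expected, "corrupted copy"
```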

If the car reacts the same for a kangaroo as it does for a human, that’s a good thing. It doesn’t mean the human is getting less attention; it’s raising the bar for the kangaroo and protecting the car and its passengers better. The whole level of safety goes up.

In all three cases, the correct response is the same, because a collision with any of those objects will endanger human life. And really, do you think that a human is capable of distinguishing between a human and a kangaroo in the fraction of a second in which a decision needs to be made?

And even if the software gets it wrong every single time, the situation is so rare that it will still be vastly safer to use an SDC. It’s like fretting over drowning because you’re wearing a seat belt.

We have had a few “hypothetical” threads on the moral and legal issues of avoiding hitting a little kid but endangering yourself and others in the process. The cars already react more quickly to an impending collision than a human being does and will be more likely to stop as opposed to “steering out of trouble” which comes with other challenges.

From a liability standpoint, if a little kid bolts out into traffic and gets pasted by a self-driving car, legally the manufacturer (or a human driver) is better off hitting the kid rather than clobbering a couple of parked cars or veering into another lane or potentially oncoming traffic. If you or an SDC do cause a collision injuring someone, and mom scoops up the kid and runs off, everyone will be trying to blame you or the car’s autopilot system rather than the kid and/or his parents. Hopefully the cameras and sensor logs could prove that the evasive action was warranted, but the headlines will be “Insane Self-Driving Car Veers Into Oncoming Traffic, Killing a Family of 4” until two years later, when it goes to court and the driver and SDC are found to have clearly been trying to do the right thing.

No. No it is not. Driving into a tree branch will not endanger your life. Hitting a koala will not endanger your life. Hitting a kangaroo will be very unlikely to endanger your life. Hitting a person will certainly endanger their life.

Absolutely. I’ve come across wildlife frequently on Australian roads and have always been able to see what it is (kangaroo, koala, rabbit, tree branch, etc). At 110 km/h the stopping distance is far greater than the distance covered while recognising what the object is. And often you see it on the side of the road before it jumps out in front of you, so you already know what it is.
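For what it’s worth, a back-of-the-envelope calculation supports this, using assumed but typical figures for reaction time and braking:

```python
# Back-of-the-envelope stopping distance at 110 km/h. The reaction time and
# deceleration are assumed (typical) values, not measured ones.
v = 110 / 3.6        # 110 km/h expressed in m/s: about 30.6 m/s
reaction_time = 1.5  # seconds: typical human perception-reaction time
decel = 7.0          # m/s^2: firm braking on a dry road

reaction_distance = v * reaction_time    # ~46 m travelled before braking starts
braking_distance = v ** 2 / (2 * decel)  # ~67 m under braking (v^2 / 2a)
print(round(reaction_distance + braking_distance))  # ~113 m in total
```

So at 110 km/h you need on the order of 110 m to stop, while recognition happens within the first second or two of travel; you can know exactly what the animal is and still be unable to stop short of it.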

Serious question for you Chronos. Do you drive a car?

If we’re seeing it well in advance, then why aren’t both drivers (human and computer) stopping for it? The only situation where the question even becomes relevant is where we don’t see it until it jumps right out in front of us and there’s no time to stop safely.

Because, as I said, the stopping distance from highway speeds is longer than the distance covered while seeing and reacting. You can see, identify, and react to an object but still have no choice other than to hit it or swerve to avoid it. I would not swerve to avoid a small tree branch or a rabbit. I would consider swerving for a larger animal such as a kangaroo, but only if I thought it was safe to do so, and I wouldn’t take the car off the road. I would do everything I could to avoid hitting a person, including taking the car off road and/or into a side barrier, but not into oncoming traffic. All of these different options rely on identifying objects in the first place, so the computer needs to have a model of the world, and that model must be tested to make sure it is accurate enough and gives appropriate outcomes. Humans already have a model of the world and don’t need related testing.
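Those graded responses could be sketched as a tiny policy table; the classes and actions below are illustrative only, but they show why the car’s world model, and not just its vehicle control, has to be tested:

```python
# Illustrative policy only: the graded responses described above, keyed off an
# object class. The whole point is that the classifier feeding this must work.
from enum import Enum, auto

class Obstacle(Enum):
    SMALL_DEBRIS = auto()  # small tree branch, rabbit
    LARGE_ANIMAL = auto()  # kangaroo
    PERSON = auto()

def evasive_action(obstacle: Obstacle, safe_to_swerve: bool) -> str:
    if obstacle is Obstacle.SMALL_DEBRIS:
        return "brake; do not swerve"
    if obstacle is Obstacle.LARGE_ANIMAL:
        return "swerve within the lane" if safe_to_swerve else "brake hard"
    # PERSON: accept damage to the car, but never cross into oncoming traffic
    return "brake hard; leave the road or take a side barrier if necessary"

print(evasive_action(Obstacle.PERSON, safe_to_swerve=False))
```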

As I also said, you will often see something on the side of the road, recognise it as a person, dog, tumbleweed, kangaroo, rabbit, etc., and not have to stop for it until it unexpectedly moves onto the road. It would be impractical to stop the car every time you see a person on the footpath.

I see you said earlier that you drive, so I can only assume that you drive in a fairly limited environment. House in the suburbs to the city or something like that? You obviously don’t drive rural roads where potential obstacles are numerous.