Ramifications of Self-Driving Cars?

Or give incentives. Or convince people that having the car drive is more convenient for them.

I suppose public transport will never catch on either, because people need to keep stuff in their cars?

In much of the US, public transport is only for people who can’t afford to own a car.

I don’t think “I need to keep stuff in my car” is the primary reason for this. I drove to work today myself, and I have nothing in the car that I’d need or want to use today. Everything I need today is in my backpack which I’ve brought into my office.

The Ford Focus is supposed to be able to park by itself; it doesn’t just beep beep BEEEP at you but actually moves itself. I don’t know what tolerances they need, but that’s definitely on my “want” list for my next car (if I ever get a next car and the next one isn’t a shared service).

People often want to keep stuff in their cars. Not necessarily always. If you’re just making a trip to the grocery store, then it doesn’t matter: You go in an empty vehicle, you buy your groceries, you bring your groceries out to an empty vehicle, you drive home, you empty it. But if you’re running errands all day, stopping at three or four stores, and you don’t want to carry your groceries through the hardware store or your hardware through the art-supplies store or your art supplies through the pet store, then you need someplace to put your stuff in between.

Then keep the car for the whole day on those errand days. I’m sure any car-share service would allow this for a minimal fee.

Sorry, I misunderstood your comment. I thought you were saying that it’s obvious people don’t want to leave things in their cars because otherwise public transportation wouldn’t be successful. But I think people do want to leave things in their cars, it’s just that public transportation makes more sense for other reasons.

I use public transportation but I would buy a self-driving car. I think it’s unlikely that I would use a ride share. I would want my own car with my own options/settings, my own stuff, my own comfort, and my own ability to leave non-perishable groceries in the garage until later because my hands are full, dammit.

If you’re used to a certain bundle of features in a product, you might be worried about switching to a product with a different bundle of features. You’re used to the features you have, and it’s nice to know you could use those features, even if you never actually do use them.

So it’s perfectly reasonable that people who are used to the current system of human driven privately owned cars will be much more comfortable sticking with the status quo, even if the reasons they say they want a private car don’t actually make that much sense.

But buying a car is a pretty significant financial burden when you’re a young person. Even getting a crappy used car has significant costs. So as a teenager or college student, are you going to invest thousands of dollars in a vehicle that you use very infrequently? You’re probably going to get used to hiring a car when you need it, even if the cost is slightly higher, because you don’t have to pay the full price up front, and you don’t have unpredictable maintenance costs.

And so you get used to the idea that you can’t treat your car as a mobile rolling storage locker, like lots of people do today. And even when you’re older and could afford the up-front costs of private ownership of a car, features like “I can stash my stuff there” don’t seem very compelling because you’ve never used your car that way.

Same thing with driver’s licenses. Lots of teenagers have a very hard time learning to drive. But in today’s world, having a driver’s license is pretty much a necessity. If you can’t drive you pay a steep price in convenience. But putting off that driver’s test until you’re older and actually own a car makes a lot more sense if there are self-driving cars around. And by the time you’re old enough to own a car, and mature enough to get a driver’s license, you’re so used to the idea of ride-sharing that the features of a privately owned car don’t seem compelling to you.

I agree with everything else you said. But I think people are just as likely to dream about owning a car, just as some people who live in apartments love it and some dream of owning a house. The dream may not be about reality, but image.

Way back in post #157, Chronos said:

Seemingly, from the idea that there’s no such thing as a driverless car because the car is the driver, Chronos reached this conclusion in post #159:

Chronos seems to believe (1) that under today’s rules of the road, self-driving cars are drivers just like people, and (2) that they should be approved today according to the same standards by which we license human drivers.

Did I misunderstand him? I don’t think so. Chronos later said:

Chronos later went on to say that we should give self-driving cars the same types of “reasonable accommodation” we might give to a disabled person. Chronos literally wants to treat driverless cars exactly like we treat people (disabled people specifically).

I think this idea is, to be blunt, inane. Others generally seem to agree. It would be trivially easy to program a car that could drive a defined loop while adhering to the speed limit, stop at a single stop sign, and demonstrate the apparent skill to do a three-point turn, even if it had no ability to safely drive in the real world.

I pointed out, in response to Chronos’s first two comments (though in a way that someone characterized as “pedantic”), that self-driving cars can’t pass today’s licensing standards. They can’t pass them because today’s standards were written for people, not robots. This means that we have to rewrite the standards to accommodate self-driving cars. Once we accept that we have to rewrite the standards, the question becomes: what should those standards say?

I pointed out that driving tests don’t actually give us much confidence that licensed drivers are safe. I noted that teen drivers are very likely to get in a collision in their first year of driving. I’m not looking for citations, but I’m confident that the overwhelming majority of people killed in car accidents are killed by people who at one time or another passed a driver’s license test, even if that license was later suspended. Driver’s license tests don’t do a good job of screening out dangerous drivers. We basically give almost anyone a driver’s license, then we watch their driving record to see just how dangerous they are before we decide as a society whether they can keep it. Chronos thinks this is a fine model for approving self-driving cars. I disagree.

If we want to authorize driverless cars on the highways, we will have to tailor standards to them, not use the piss-poor standards we apply to people. As a society, we can accept relatively poor standards for licensing drivers because there is a limit to how much danger an individual driver can cause before they reveal themselves as too dangerous to be allowed to continue driving. Drivers can only drive drunk, hit pedestrians, cause car accidents, or accrue speeding tickets so many times before states cancel their licenses. There is no limit to the number of self-driving cars that can use a particular algorithm; thus, there is no limit to the number of injuries that a broken algorithm could potentially cause.

So I said that before we allow an algorithm to pilot a million cars, perhaps we should be roughly a million times more confident that it works.

[QUOTE=Richard Pearse]
Not the car, the algorithm controlling the car. You might have the same piece of software responsible for every car in the US, if it had a serious unknown problem it could possibly cause a lot of deaths. Which just means the software has to be very rigorously tested (not by doing a DMV test obviously.)
[/QUOTE]

Yes, like this.

[QUOTE=drachilix]
It doesn’t need to be a million times better than a person, if its 10% better than the average person thousands of lives and tens of thousands serious injuries a year will be avoided.
[/QUOTE]

I did not define what it means for a self-driving car to “work,” and I didn’t intend to suggest that self-driving cars must be a million times safer. I’d say that a self-driving algorithm works if it is safer than the average driver who hasn’t lost his license and isn’t at risk of losing his license. That’s probably significantly better than the mean licensed driver because it carves out the bottom group of the most dangerous drivers who don’t belong on the roads now.

[QUOTE=Mijin]
And in terms of testing I am sure that a given self-driving program will be tested millions of times longer than a human driver (bear in mind the program can be tested in many cars as the same time). With each run gathering vastly more information being than a driving instructor could record. And systematically testing every permutation of every scenario we can think of.
[/QUOTE]

You are right. My driving test was about 3 miles long. Google has test driven its autonomous cars about 1.5 million miles as of now. (Waymo - Self-Driving Cars - Autonomous Vehicles - Ride-Hail). With this type of testing, we can in fact develop a million times more confidence that Google’s self-driving car is at least as safe as an average driver compared with the level of confidence my DMV examiner had that I was good enough to get my license.
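Just to put rough numbers on that claim, here’s a trivial back-of-envelope calculation using only the two figures from my post above (3 miles for my DMV test, 1.5 million miles for Google). It’s illustrative only: more miles doesn’t translate linearly into more statistical confidence, but it shows the sheer scale of the difference in testing exposure.

```python
# Back-of-envelope comparison of testing exposure (illustrative only).
# Both figures come from the post above, not from official statistics.
dmv_test_miles = 3              # approximate length of my DMV road test
google_test_miles = 1_500_000   # Google's reported autonomous miles

ratio = google_test_miles / dmv_test_miles
print(f"Google's testing covers roughly {ratio:,.0f}x the distance of a DMV road test")
```

That works out to about 500,000x the distance, which is the ballpark behind my “a million times more confidence” phrasing.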

[QUOTE=Richard Pearse]
People are not going to accept a computer driver that is slightly better than the average person, it must be better than the best person.

If a self-driving car is 10% better than the average person then all of the people who are >10% better than the average person will be better off driving themselves.

If you want mass acceptance of this, then it has to better than a person and it has to be a lot better. Most people don’t think about risk analytically.

*Or whatever amount, significantly more confident than we are in people.
[/QUOTE]

I agree with Richard Pearse that people won’t accept a car that they don’t perceive as being a lot better than they are as drivers. Since people have an inflated view of how good they are, and because people will be overly sensitive to any self-driving car’s faults, self-driving cars will probably have to be orders of magnitude better than people before they gain consumer acceptance.

Insurance companies will handle that with higher rates for manually driven vehicles or vehicles that were being manually controlled at the time of an accident. Lots of people talk a lot of shit until it costs them more money. Suddenly they see the light when they are paying twice what everyone else does for car insurance.

Orders of magnitude?

Just for a little insight, Google publishes incident reports on all accidents involving the prototype cars; the vast majority of these incidents were prototypes being rear-ended by human drivers or human drivers trying to squeeze by in tight traffic situations.

Right now Google cars are involved in accidents about as often as the US average.

The difference: zero at-fault accidents out of 13; every single one was a human fuckup from a human-controlled car.

Incident reports
http://www.google.com/selfdrivingcar/reports/

A brief video showing how the car’s systems perceive other vehicles, bicycles, and pedestrians.

https://www.google.com/selfdrivingcar/how/

I do not think that self-driving cars are subject to the same regulations that human drivers are. I think that they should be. If those regulations are loose enough that they’ll let dangerous self-driving cars on the road, then they’re also dangerous enough to let dangerous humans on the road, and that’s already a problem independent of the existence of self-driving cars. But we as a society have already decided that any driver that can pass the DMV test is safe enough, and having already made that decision, why shouldn’t we extend it to self-driving cars?

But I’ll bet a vast majority of lethal accidents are caused by the worst drivers (say the 25% worst) and drunk drivers. If you can create a SDC that’s just average you eliminate most accidents.

Especially since a self-driving car will never fall asleep or get distracted by cell phones.

Sure, but you need to sell it to people. And it has to be sold to you, Deeg, who probably thinks he is a reasonably safe driver. Each individual person has to believe it’s a good idea for themselves to buy it.

Personally I would buy a car with self-driving capabilities that I could also drive myself as long as it was a similar price to a car without self-drive capabilities. When I last bought a car I could have had lane following and self-parking options but I didn’t get them because they were a few thousand dollars more expensive.

I would not buy a car that had no manual controls for a couple of reasons.

  1. I’m not a car nut, but I like driving. If I’m going to sit in a car for 30–45 minutes, I’d rather be driving it than not driving it. I don’t like checking Facebook or doing anything else in a car because it tends to make me feel a bit sick if I do it for too long, and I like to watch what’s happening around me. I own a stick shift rather than an automatic because the act of driving is more important to me than the convenience of not having to change gears.

  2. If ever the car encounters a situation where it doesn’t know what to do, I don’t want to be left stranded.

Now if there are no cars with manual controls then I wouldn’t have a choice, but I don’t know how you move from the current situation to a world where autonomous cars are all you can get.

Because people aren’t computers and computers aren’t people. A computer can be programmed to do a very specific set of tasks and nothing else. A person who can do a representative set of tasks can be assumed to be able to do other similar tasks, but you can’t make that assumption with a computer.

I’m not sure where you got your total of 13 accidents, but at least one was the Google car’s fault. Still, one minor at-fault collision in 1.5 million miles is a lot better than the average person. It’s about an order of magnitude better by my reckoning.
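Here’s the reckoning spelled out. The human crash rate below is an assumed illustrative figure (a few crashes per million miles, in the commonly cited range once unreported fender-benders are included), not a citation; the Google numbers are from the reports linked in this thread.

```python
# Hedged sketch of the "order of magnitude" estimate.
# human_crashes_per_million_miles is an ASSUMED figure for illustration.
human_crashes_per_million_miles = 4.0   # assumption, not a cited statistic
google_at_fault_incidents = 1           # one at-fault incident in the reports
google_miles = 1_500_000

google_rate = google_at_fault_incidents / (google_miles / 1_000_000)
print(f"Google at-fault rate: {google_rate:.2f} per million miles")
print(f"Ratio vs assumed human rate: {human_crashes_per_million_miles / google_rate:.1f}x")
```

With these assumed numbers the gap comes out around 6x, which is roughly an order of magnitude; pick a different assumed human rate and the ratio moves accordingly.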

http://www.google.com/selfdrivingcar/files/reports/report-0216.pdf

I agree with you that lower insurance rates will increase consumer acceptance of self-driving cars.

You are aware, I hope, that most modern cars have tons of software in them that does the tasks that used to be done by mechanical components? The firmware in my Prius had a bug that shook the steering when the car went over a certain type of rough pavement.
In my experience with both software and hardware bugs, the ones that escape to the field are active only in very particular circumstances. They are not going to cause a lot of deaths, probably way fewer than the air bag ones. Car companies already know a lot about reliability, since their electronic components have to work in the not very friendly environment under the hood.
I’d worry a lot more about hacking, but that is going to happen with increasingly smart standard cars also.
Car companies do testing better than most.

Computers and programming have moved on a bit since the days of Desk Set with Tracy and Hepburn. Just for one example, just look at the response of Google Maps to traffic jams, even sudden ones. In my experience it does significantly better than people except maybe people who live right in the area.