Self-driving cars

Actually, if all the cars involved were driverless, we'd have logs of exactly what was going on, so it would be far easier to figure out who was to blame than it is today. My scenario, by the way, assumes that only your car is driverless. AIs don't run red lights (I see humans run several a day), and they don't follow too closely.

Define reality. Based on the traffic reports I listen to religiously, there is a lot more reality, in terms of crashes, on the roads where Google cars drive than on twisty mountain roads.

The car should aim for a large crowd of children. They will act as a cushion, gently slowing the vehicle so that I remain uninjured.

In reality, self-driving cars will be so much better than human drivers that this kind of thing will barely be worth considering. The typical human driver will panic and survive only through luck or excellent safety systems. An excellent driver will probably maintain control, but still has only a limited set of options. A robotic driver, however, can brake all four wheels individually, which opens up a wide range of scenarios that would not otherwise be survivable. Blowouts and similar problems are also far less likely to get out of hand if reacted to immediately, rather than after the second or so it takes a human to do something.
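
To make that four-wheel point concrete, here is a toy sketch (every threshold, gain, and function name is invented; real stability-control code is far more involved) of countering the yaw from a blowout with asymmetric braking:

```python
# Illustrative sketch only: the thresholds, gain, and simple proportional
# correction are assumptions, not any real vehicle's control code.

def blowout_response(pressures_kpa, yaw_rate, target_yaw_rate=0.0):
    """Return per-wheel brake commands (0..1) ordered FL, FR, RL, RR.

    yaw_rate > 0 means the car is yawing left (ISO convention).
    """
    BLOWOUT_THRESHOLD_KPA = 100          # assumed pressure-loss threshold
    GAIN = 0.5                           # assumed proportional gain

    blown = [p < BLOWOUT_THRESHOLD_KPA for p in pressures_kpa]
    if not any(blown):
        return [0.0, 0.0, 0.0, 0.0]      # nothing to do

    # Brake the side opposite the yaw harder, creating a counter-moment --
    # something a human with a single brake pedal cannot do.
    correction = max(-1.0, min(1.0, GAIN * (target_yaw_rate - yaw_rate)))
    clamp = lambda x: max(0.0, min(1.0, x))
    base = 0.3                           # gentle overall deceleration
    left = clamp(base + correction / 2)
    right = clamp(base - correction / 2)
    return [left, right, left, right]

# Example: front-left tire loses pressure and the car yaws left;
# the right-side wheels get the harder braking.
print(blowout_response([40, 220, 215, 218], yaw_rate=0.4))  # [0.2, 0.4, 0.2, 0.4]
```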

Every car should be programmed to protect its occupants at all costs. Non-occupants should be considered, but as a secondary priority after the occupants are protected. This is how humans have always used their technology, and how human-created intelligences should be designed to act as well. A knife or gun designed to protect others at the expense of its owner would never be bought or used (or built) in the first place, and the same goes for cars as well. “Enlightened self interest” works for machines too.
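
As a toy illustration of that ordering (the maneuvers and harm scores below are invented for the example, not anyone's actual design), a planner could rank candidate maneuvers lexicographically, so harm to others only ever breaks ties between options that protect the occupants equally well:

```python
# Invented example of "occupants first, others second" as a lexicographic
# comparison; the maneuvers and harm scores are made up for illustration.

candidate_maneuvers = [
    {"name": "brake straight", "occupant_harm": 0.3, "bystander_harm": 0.6},
    {"name": "swerve right",   "occupant_harm": 0.1, "bystander_harm": 0.9},
    {"name": "swerve left",    "occupant_harm": 0.9, "bystander_harm": 0.1},
]

# Python compares tuples element by element, so bystander harm matters
# only when two maneuvers are equally safe for the occupants.
best = min(candidate_maneuvers,
           key=lambda m: (m["occupant_harm"], m["bystander_harm"]))
print(best["name"])  # -> swerve right
```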

Fair enough; I've never had a tire blow out. I'd imagine that a relatively unskilled driver wouldn't know how to handle a blowout.

But forget the tire blowout; let's say a pedestrian runs out in front of your car. Does your car mow down the pedestrian, swerve right into the Starbucks patrons, or swerve the other way into oncoming traffic? I had a very similar situation today: two people crossed the road right in front of me without so much as glancing my way. Had I been going a bit faster, they would not have been as fortunate.

Not addressing the calculus of whom to hit, but two things on mapping:

  1. I personally believe that any self-driving car will not rely on mapping/GPS alone; there will also be advanced camera systems that "see" the road much the same way that we do.
  2. Even if the car did rely primarily or even solely on GPS and mapping, then with most or all cars using the technology, how long do you think it would take to update a poorly mapped road to 100% accuracy? (I.e., I think any driverless-car system will include real-time server updates on road conditions, routes, etc.; see the sketch below.)
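
A rough sketch of what that correction loop might look like from the car's side (the endpoint URL, payload fields, and 2 m threshold are all hypothetical):

```python
# Hypothetical crowd-sourced map correction: a car whose cameras "see" the
# road disagree with its map uploads the observation so the fleet's map heals.
import json
import urllib.request

MAP_SERVER = "https://example.com/map-corrections"  # hypothetical endpoint

def report_map_discrepancy(mapped_fix, camera_fix, threshold_m=2.0):
    """Upload the camera's observation if it disagrees with the stored
    map position by more than threshold_m meters."""
    dlat = camera_fix["lat"] - mapped_fix["lat"]
    dlon = camera_fix["lon"] - mapped_fix["lon"]
    # Crude flat-earth conversion (meters per degree at mid-latitudes);
    # fine at this scale.
    error_m = ((dlat * 111_000) ** 2 + (dlon * 78_000) ** 2) ** 0.5
    if error_m <= threshold_m:
        return None                       # map already agrees with reality
    payload = json.dumps({"observed": camera_fix, "mapped": mapped_fix,
                          "error_m": round(error_m, 1)}).encode()
    req = urllib.request.Request(MAP_SERVER, data=payload,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)    # with many cars, fixes land fast
```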

If I'm "driving" the car, it should go into oncoming traffic and try to save me. If you're driving, then it should go off the cliff to save the rest of us.

I don't think people would buy a car if they knew there was a chance it would kill them to save others. I know I'd go back to riding my bike or a horse first.

I would briefly like to point out that the main opposition to self-driving cars (which I recall seeing in the Senate?) was that they would be TOO safe.
Police officers get a major chunk of their income from tickets and fines. Safe, rational, computer-driven cars that you couldn't plausibly accuse of anything to fill your quota would be bad for cops.
I don't see why self-driven cars wouldn't be able to brake and slowly veer off the road, or manage weather conditions and other poor drivers. They might not have the reaction time, but they would probably cause fewer accidents.
Self-driven cars would probably be a lot safer in the long run, tbh.

  1. I don’t see where the algorithm made the decision to kill one instead of two. How does the swerving car know how many people are in the other vehicles?
  2. As the article states, the algorithm was designed to avoid swerving into oncoming traffic. This is not the same thing as calculating the # of people who would possibly die under various scenarios and then taking the lesser value.
  3. Unfortunately, the right side was a cliff, in which case…
  4. The algorithm may be designed to decrease speed preemptively on a road like this, given the lack of maneuverability in case of an accident.
  5. The car would become unmaneuverable if the algorithm slammed the brakes. But the algorithm would, or should, be designed to handle the contingency far better than a typical panicked driver, by steadily reducing speed through both braking and engine braking via downshifting (see the sketch after this list).
  6. When a person slams on the brakes, they often jerk the steering wheel one way or another, causing the car to swerve… and the article seems to assume that the algorithm will jerk the wheel too, but it doesn't follow that this is the case.
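
Regarding point 5, a toy sketch (the deceleration rate and shift points are invented) of shedding speed steadily with modest braking plus downshifting, instead of slamming the brakes:

```python
# Invented numbers throughout; this only illustrates "slow down steadily,
# letting the engine share the braking load," not real transmission logic.

def controlled_slowdown(speed_mps, gear, decel_mps2=3.0, dt=0.1):
    """Yield (speed, gear) pairs while decelerating at a fixed, modest rate."""
    DOWNSHIFT_POINTS = {5: 25.0, 4: 18.0, 3: 12.0, 2: 7.0}  # m/s, assumed
    while speed_mps > 0:
        speed_mps = max(0.0, speed_mps - decel_mps2 * dt)
        # Downshift as speed falls so engine braking supplements the brakes.
        if gear in DOWNSHIFT_POINTS and speed_mps < DOWNSHIFT_POINTS[gear]:
            gear -= 1
        yield speed_mps, gear

for speed, gear in controlled_slowdown(30.0, gear=5):
    pass  # a real controller would actuate brakes/transmission each tick
print(f"stopped in gear {gear}")  # -> stopped in gear 1
```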

A citation about the likelihood of such a blowout:

So, if I’m running the numbers correctly, of the ~60k accidents on French roads, 131 were from front-tire blowouts that involved a second vehicle:

60,397 accidents
× 0.065 (share of accidents that involved a blowout)
≈ 4,047 accidents that involved a blowout (strictly, 60,397 × 0.065 ≈ 3,926; 4,047 implies a rate closer to 6.7%)
× 0.13 (the complement of "87% of blowout-caused accidents involved only one vehicle")
≈ 526 blowout-caused accidents that involved a second vehicle
× 0.25 (blowout-related accidents were four times more likely to involve a rear wheel than a front wheel)

≈ 131 total accidents involving front-wheel blowouts that caused damage to cars other than the "blowout" car.

Even disregarding the rear/front-wheel aspect of the problem, of the ~60,000 accidents studied, only 526 were of the "blowout that involves a second vehicle" type mentioned in the article… about 0.9% of all accidents.
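
Double-checking those numbers (a quick sketch; the only inputs are the figures quoted above) turns up two wrinkles: the 6.5% rate doesn't quite reproduce 4,047, and "four times more likely" arguably puts the front-wheel share at 1/5 rather than 1/4.

```python
# Re-running the arithmetic; all inputs are figures quoted in the post.
accidents = 60_397
print(round(accidents * 0.065))        # 3926 -- the quoted 4,047 implies ~6.7%

second_vehicle = 4_047 * 0.13          # complement of "87% involved one vehicle"
print(round(second_vehicle))           # 526

# If rear blowouts are four times as likely as front ones, the front-wheel
# share is arguably 1/5 (0.2) rather than the 1/4 (0.25) used above:
print(int(second_vehicle * 0.25))      # 131
print(int(second_vehicle * 0.20))      # 105

print(f"{526 / accidents:.2%}")        # 0.87% of all accidents
```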

So people would rather kill themselves and/or others while driving? A machine (the car) can’t do the honors?

Even my bottom-of-the-barrel car from 1999, with no upgrades whatsoever, knows whether there's a passenger in the passenger seat. Hence the annoying passenger seat-belt warning beeps.

So I don’t see it being much of a stretch for smart cars to report this information to other cars.

Actually, it doesn’t know if it’s a passenger or 40 pounds of groceries. My dog sets off the seatbelt chime too, and he just refuses to buckle up.

Yes they would, because everyone thinks they're an above-average driver who would never get in a car accident. In addition, self-driving cars that kill their occupants would be outcompeted in the marketplace by those that spare the occupants at all costs.

Driverless cars will get drunks and idiots off the road, thus improving safety a thousandfold. Anyone who is worried about a driverless car making a sub-optimal response due to a mechanical failure like a blowout is doing a VERY poor job of assessing risk.

Funny thing. Airliners have autopilots that will fly the aircraft from takeoff to landing. Yet the pilots don’t trust them to exercise the judgment that years of experience provides when things begin to go sideways. Probably the same with drivers. I’d rather trust my judgment and skills to avoid a reckless moron running from the cops than some computer program. Programming a response to every conceivable emergency is way too much of a challenge for it to ever be accepted. IMHO

Well I wasn’t just talking about mechanical failures. I am talking about “no-win” situations in general, in which something or someone gets damaged or injured. What is the AI supposed to do? The mechanical failure example was just a convenient example.

I’ll have to test this out with my car. I do know that the seatbelt sensor isn’t easily bypassed. You can’t just pull out the seatbelt or push down on the button for the warning to go away. Not sure about the passenger sensor. Interesting point.

In your entire life, you have had to use your judgment and skills to avoid exactly HOW many reckless, cop-fleeing morons?

Humans suck at risk assessment; if we didn't, then everyone would stop at stop signs instead of running them endlessly in a futile attempt to get home sooner.

The only rational metric by which to judge these cars is whether they kill/injure/break fewer people/things than their human counterparts do. If you get a 1% across-the-board safety increase, then it's a win. The reality is that they will be safer for dozens of reasons that humans don't even take into account while driving.

Nor is it a stretch to assume that the "blowout car" will drive more slowly, given that there is a cliff on the right side of the road and heavy traffic on the left.

Will a “Baby on Board” RFID tag cause crashing automated cars to form a protective shell around me?

So… in a crisis, the robot cars are supposed to confer, at light speed, and decide how to kill the fewest humans?

Spock-like logic. Joker-like sensibility.