#51 · wolfman · 11-22-2017, 01:46 PM
Quote:
Originally Posted by DrCube View Post
If this isn't something human drivers ever have to do, why do we expect robotic cars -- which are supposed to be many orders of magnitude safer than us skittish meatbags -- to encounter this problem with enough regularity for it to become a major ethical issue for its programmers?
Because that's what programming is. You have to give the car a set of instructions for how to deal with every possible case of input data before it hits the road. And every case you don't think to put in will still result in something happening. It's an ethical problem because it can happen, and you have to make the decision ahead of time rather than brushing it off, as people do with the trolley problem, by saying "It will never happen to me, so I don't have to decide".
#52 · Snarky_Kong · 11-22-2017, 01:52 PM
It doesn't matter what the car does if it has to choose between killing the occupant and killing those outside of it. It could kill both, and self-driving cars would still be vastly better than the alternative.

The true "trolley problem" with respect to self-driving cars isn't "passenger vs. people in crosswalk"; it's "30,000 traffic deaths a year due to human error" vs. "much, much less than that because computers are better at driving."
#53 · griffin1977 · 11-22-2017, 02:01 PM
Quote:
Originally Posted by DrCube View Post
Has anyone actually encountered the trolley problem in real life? With real trolleys or a car or some other vehicle? I just don't think this is a scenario that comes up. Sure, metaphorically, but that's not what I'm talking about. I'm talking about literally steering a large object between two definitely fatal paths, the only difference between which are the number of expected fatalities.

If this isn't something human drivers ever have to do, why do we expect robotic cars -- which are supposed to be many orders of magnitude safer than us skittish meatbags -- to encounter this problem with enough regularity for it to become a major ethical issue for its programmers?
The point is that humans hardly ever encounter the trolley problem, or at least they do encounter it but are required to react so quickly there is no way in hell they are going to consider the ethical implications of what they are doing. I am sure there are plenty of people out there who lived to regret the fact that their "reptile brain" said "swerve" and as a result they caused other people's deaths. Many of those people would probably have taken a different path if they had had the luxury of carefully considering the implications of what they were doing. The "trolley problem" WAS a completely hypothetical thought experiment, not a real problem for human beings.

But for a computer, that fraction of a second is plenty of time to run an involved analysis of what it should do. And the people who write that algorithm have all the time in the world to consider the ethical implications of it.
#54 · Ravenman · 11-22-2017, 02:26 PM
Have we established what humans should do in the scenario?

And if we have established that, have we also established that if humans do the wrong thing under the prescribed circumstances, that they would face substantial penalties for their poor choice; and had they chosen the right thing, that they would be free from penalties?
#55 · SamuelA · 11-22-2017, 02:49 PM
Quote:
Originally Posted by Ravenman View Post
Have we established what humans should do in the scenario?

And if we have established that, have we also established that if humans do the wrong thing under the prescribed circumstances, that they would face substantial penalties for their poor choice; and had they chosen the right thing, that they would be free from penalties?
Just because you can come up with an algorithm that does "whatever a human typically does" doesn't make it optimal. An optimal solution is the one that does the least damage, and if it must do damage, it prioritizes. Smoothly and cleanly. It also is aware of the uncertainty from its own sensors and vision systems and doesn't take unnecessary risks. But, on the other hand, it does take risks when the math is in favor of them. (It's better to floor it into cross traffic if the risk is smaller than the risk of getting hit by an 18-wheeler approaching from the rear with failed brakes.)
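As a rough illustration of that kind of risk arithmetic, here is a minimal Python sketch; every probability and harm weight in it is an invented placeholder, not data from any real vehicle:

Code:
# Compare two candidate maneuvers by probability-weighted (expected) harm.
# All numbers below are invented placeholders for illustration only.
CANDIDATE_MANEUVERS = {
    # maneuver: list of (probability of outcome, harm score of outcome)
    "stay_stopped_in_lane": [(0.30, 9.0),   # rear-ended at speed by the truck
                             (0.70, 0.0)],  # the truck stops or misses us
    "floor_it_into_cross_traffic": [(0.05, 6.0),   # clipped at low speed in the intersection
                                    (0.95, 0.0)],  # we clear the intersection
}

def expected_harm(outcomes):
    """Probability-weighted harm for one maneuver."""
    return sum(p * harm for p, harm in outcomes)

def pick_maneuver(candidates):
    """Pick the maneuver with the lowest expected harm."""
    return min(candidates, key=lambda name: expected_harm(candidates[name]))

for name, outcomes in CANDIDATE_MANEUVERS.items():
    print(name, expected_harm(outcomes))            # 2.7 vs 0.3
print("chosen:", pick_maneuver(CANDIDATE_MANEUVERS))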
#56 · Snarky_Kong · 11-22-2017, 02:58 PM
Quote:
Originally Posted by SamuelA View Post
Just because you can come up with an algorithm that does "whatever a human typically does" doesn't make it optimal. An optimal solution is the one that does the least damage, and if it must do damage, it prioritizes. Smoothly and cleanly. It also is aware of the uncertainty from its own sensors and vision systems and doesn't take unnecessary risks. But, on the other hand, it does take risks when the math is in favor of them. (It's better to floor it into cross traffic if the risk is smaller than the risk of getting hit by an 18-wheeler approaching from the rear with failed brakes.)
Yes, it should do what causes the least damage. In this case the thing that causes the least damage is whatever gets self-driving cars adopted the fastest, regardless of what it actually does in a trolley problem scenario. If people won't buy them when the car is programmed to kill the driver in that case, then the cars should be programmed to plow through whatever crowd is in their way. (1 crowd) * (# of trolley problems per year) << (# of people that die in accidents due to human error per year).

The meta problem is the important problem.
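To put rough numbers on that inequality (the crowd size and trolley-scenario frequency below are pure assumptions for illustration; the 30,000 figure is the round number quoted above):

Code:
human_error_deaths_per_year = 30_000   # round figure quoted above
crowd_size = 5                         # assumed size of one "trolley" crowd
trolley_scenarios_per_year = 100       # assumed frequency, almost certainly too high

worst_case_trolley_deaths = crowd_size * trolley_scenarios_per_year
print(worst_case_trolley_deaths)                                  # 500
print(worst_case_trolley_deaths < human_error_deaths_per_year)    # True, by a factor of 60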
#57 · griffin1977 · 11-22-2017, 03:00 PM
Quote:
Originally Posted by Ravenman View Post
Have we established what humans should do in the scenario?

And if we have established that, have we also established that if humans do the wrong thing under the prescribed circumstances, that they would face substantial penalties for their poor choice; and had they chosen the right thing, that they would be free from penalties?
This is the bit of this that I do find a bit worrying. I don't think developers are sitting around stroking their beards discussing normative ethical theory. I DO think that lawyers are asking this exact question with respect to both civil and criminal penalties. And what they consider the best way to reduce "harm", based on that answer, probably has very little to do with ethics as most people know it. Should it favour hitting a girl scout over a rich father, since the resulting damages will be less?
#58 · SamuelA · 11-22-2017, 03:02 PM
Basically, instead of thinking of the problem like a series of edge cases, and then insisting that some edge cases are so unlikely that they will never happen in the next 30 years with hundreds of millions of autonomous vehicles on the road, just look at the algorithm.

You want to get the outcome of :

minimize damage to the occupant of the vehicle
minimize damage to other people
minimize damage to the vehicle
minimize damage to animals outside the vehicle
minimize damage to objects outside the vehicle
minimize violations of the law

Obviously, though, these objectives conflict. For instance, crossing a double white line even briefly to avoid hitting road debris that might puncture a tire is a violation of the law. But if there is no traffic in the other lane for a long sight distance, this is probably a good path to take. If you're really good, though, and have very precise modeling of where each tire of your vehicle falls and the distances between your undercarriage and the ground, you might be able to plot a course right through the road debris where none of your tires actually roll over anything that looks sharp. So that's an even better path.

So one solution to these conflicting objectives is just hierarchical minimization. Find the path that has the best outcome for the first objective in the hierarchy. Consider it and paths that are only slightly worse than the best path for each objective down the list. That makes for a nice, clean algorithm that is likely to actually work. And the same code that decided to cross a double white line to avoid some debris when it was clear (but didn't when it wasn't clear of traffic) is the one that decides to bumper bump the skinnier girl scout who is farther away in the middle of the road, if that ever happens.
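In sketch form, that hierarchical (lexicographic) minimization could look like the following; the objectives mirror the list above, but the costs and the tolerance value are invented for illustration, and this is not anyone's production planner:

Code:
# Hierarchical (lexicographic) minimization over candidate paths: filter by the
# most important objective first, keep paths within a small tolerance of the
# best, then repeat down the hierarchy. All numbers are invented.
OBJECTIVES = [            # most important first, mirroring the list above
    "occupant_harm",
    "other_people_harm",
    "vehicle_damage",
    "animal_harm",
    "object_damage",
    "law_violations",
]

TOLERANCE = 0.05          # "only slightly worse than the best" margin (invented)

def choose_path(paths):
    """paths: dict of path name -> dict of objective -> cost (lower is better)."""
    candidates = dict(paths)
    for objective in OBJECTIVES:
        best = min(costs[objective] for costs in candidates.values())
        candidates = {name: costs for name, costs in candidates.items()
                      if costs[objective] <= best + TOLERANCE}
        if len(candidates) == 1:
            break
    return next(iter(candidates))   # ties broken arbitrarily here

if __name__ == "__main__":
    # Invented costs for the road-debris example above.
    paths = {
        "stay_in_lane_hit_debris": dict(occupant_harm=0.2, other_people_harm=0.0,
                                        vehicle_damage=0.6, animal_harm=0.0,
                                        object_damage=0.1, law_violations=0.0),
        "cross_double_white_line": dict(occupant_harm=0.0, other_people_harm=0.0,
                                        vehicle_damage=0.0, animal_harm=0.0,
                                        object_damage=0.0, law_violations=1.0),
        "thread_between_debris":   dict(occupant_harm=0.0, other_people_harm=0.0,
                                        vehicle_damage=0.0, animal_harm=0.0,
                                        object_damage=0.0, law_violations=0.0),
    }
    print(choose_path(paths))   # -> thread_between_debris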

#59 · griffin1977 · 11-22-2017, 03:05 PM
Quote:
Originally Posted by SamuelA View Post

So one solution to these conflicting objectives is just hierarchical minimization. Find the path that has the best outcome for the first objective in the hierarchy. Consider it and paths that are only slightly worse than the best path for each objective down the list. That makes for a nice, clean algorithm that is likely to actually work.
How do you define "work"? This is an ethical problem. Saying "always favor the people in the car over those outside it" is an ethical decision. It is not necessarily an unethical, bad, or evil one, but it is an ethical decision. And it is one the car companies will have to defend in court (and the court of public opinion) the first time it results in deaths.
#60 · SamuelA · 11-22-2017, 03:07 PM
Quote:
Originally Posted by griffin1977 View Post
This is the bit of this that I do find a bit worrying. I don't think developers are sitting around stroking their beards discussing normative ethical theory. I DO think that lawyers are asking this exact question with respect to both civil and criminal penalties. And what they consider the best way to reduce "harm", based on that answer, probably has very little to do with ethics as most people know it. Should it favour hitting a girl scout over a rich father, since the resulting damages will be less?
There's another consideration: brand value. This is why I think "the driver always comes first" is optimal. It devalues your brand as a whole and drives consumers to competing brands if they know the driver doesn't always come first. That dwarfs the cost of any single lawsuit.

And the best we can do with present computer vision tech is differentiate between "probably a human" and "probably an obstacle". We cannot currently assess someone's approximate net worth or age from the camera of a moving car that has finite mobile computing power and only a fraction of a second to decide.

This is about what we have to actually work with: https://www.youtube.com/watch?v=4eIBisqx9_g

Note that this particular object detector/classifier is state of the art and is more advanced than the solution Waymo is likely using at the moment. It also happens to run fast enough to use in an autonomous car, processing hundreds of image frames per second on GPU hardware.
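For a sense of what a "probably a human" vs. "probably an obstacle" split looks like in code, here is a minimal sketch using torchvision's stock pretrained detector as a stand-in; it is not the detector in the linked video, not Waymo's stack, and the 0.5 confidence threshold is an arbitrary choice:

Code:
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

PERSON_LABEL = 1  # COCO class index for "person"

# Stock pretrained detector as a stand-in (older torchvision releases use pretrained=True).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def classify_frame(image_path, score_threshold=0.5):
    """Return (kind, score, box) for each confident detection in one camera frame."""
    frame = convert_image_dtype(read_image(image_path), torch.float)  # CxHxW in [0, 1]
    with torch.no_grad():
        detections = model([frame])[0]   # dict with "boxes", "labels", "scores"
    results = []
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if score < score_threshold:
            continue
        kind = "probably a human" if label.item() == PERSON_LABEL else "probably an obstacle"
        results.append((kind, float(score), box.tolist()))
    return results

# Example call (the file name is hypothetical):
# print(classify_frame("dashcam_frame.png"))

A real onboard system would use a much faster single-stage detector and fuse it with lidar/radar, but the "person vs. everything else" output is the part that matters for the argument here.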

#61 · SamuelA · 11-22-2017, 03:11 PM
Quote:
Originally Posted by griffin1977 View Post
How do you define "work"? This is an ethical problem. Saying "always favor the people in the car over those outside it" is an ethical decision. It is not necessarily an unethical, bad, or evil one, but it is an ethical decision. And it is one the car companies will have to defend in court (and the court of public opinion) the first time it results in deaths.
By "work", I mean in a technical sense. As in, this algorithm is simple and smooth enough that the car following it is unlikely to do something really stupid or unexpected as a consequence.
#62 · griffin1977 · 11-22-2017, 03:13 PM
Quote:
Originally Posted by SamuelA View Post
There's another consideration: brand value. This is why I think "the driver always comes first" is optimal. It devalues your brand as a whole and drives consumers to competing brands if they know the driver doesn't always come first. That dwarfs the cost of any single lawsuit.

And the best we can do with present computer vision tech is differentiate between "probably a human" and "probably an obstacle". We cannot currently assess someone's approximate net worth or age from the camera of a moving car that has finite mobile computing power and only a fraction of a second to decide.

This is about what we have to actually work with: https://www.youtube.com/watch?v=4eIBisqx9_g

Note that this particular object detector/classifier is state of the art and is more advanced than the solution Waymo is likely using at the moment. It also happens to run fast enough to use in an autonomous car, processing hundreds of image frames per second on GPU hardware.
True, but GPUs are advancing fast (and remember, real autonomous cars don't actually exist yet). I am sure the lawyers are analyzing hypothetical computer vision technology in X years' time, when autonomous cars hit the roads, not the current technology.
#63 · Shodan · 11-22-2017, 03:47 PM
Quote:
Originally Posted by SamuelA View Post
... this algorithm is simple and smooth enough that the car following it is unlikely to do something really stupid or unexpected as a consequence.
Said every programmer every time he put something into production.

Regards,
Shodan
#64 · griffin1977 · 11-22-2017, 04:05 PM
Quote:
Originally Posted by SamuelA View Post
For instance, crossing a double white line even briefly to avoid hitting road debris that might puncture a tire is a violation of the law.
This is its own legal conundrum, unrelated to trolley problems. You are programming a car to deliberately break the law. Does that open you up to legal liability, either directly (is it illegal to conspire to break traffic laws in any jurisdiction?) or indirectly (can you get sued when your customer gets a ticket for crossing that double white line?).
#65 · Ravenman · 11-22-2017, 04:08 PM
Quote:
Originally Posted by SamuelA View Post
Just because you can come up with an algorithm that does "whatever a human typically does" doesn't make it optimal. An optimal solution is the one that does the least damage, and if it must do damage, it prioritizes. Smoothly and cleanly. It also is aware of the uncertainty from its own sensors and vision systems and doesn't take unnecessary risks. But, on the other hand, it does take risks when the math is in favor of them. (It's better to floor it into cross traffic if the risk is smaller than the risk of getting hit by an 18-wheeler approaching from the rear with failed brakes.)
You keep turning the question to one of an algorithm. But that doesn't answer the question: what should humans do?

Quote:
Originally Posted by griffin1977 View Post
This is the bit of this that I do find a bit worrying. I don't think developers are sitting around stroking their beards discussing normative ethical theory. I DO think that lawyers are asking this exact question with respect to both civil and criminal penalties. And what they consider the best way to reduce "harm", based on that answer, probably has very little to do with ethics as most people know it. Should it favour hitting a girl scout over a rich father, since the resulting damages will be less?
Well, our system of torts isn't exactly based on assigning guilt to those who do wrong; it's based on penalizing wrongdoers' wallets. And the ideal is that more serious wrongs cost more, which is essentially a judgment made by judges and juries. If anything, the system of civil claims probably provides a better starting point for this debate, at least as compared to navel-gazing and fretting over whether it is worse if a little girl is killed by a drunk driver or by Elon Musk's corporate software department.
#66 · gatorslap · 11-22-2017, 04:38 PM
You all seem rather confident in the ability of the programmers of these machines to create self-driving cars that will accurately execute their intended algorithm 100% of the time. Any major piece of programming contains bugs.

Less than a year ago, Uber conducted a short-lived test of self-driving cars in San Francisco. Their cars were seen taking illegal "hook" right turns across bike lanes. Presumably, this particular issue has since been corrected. But in my view this should be seen as illustrative, not an anomaly. If they can't get something this basic right out of the gate (it's in the state's traffic code), I'm not so sure they'll get the more nuanced issues smoothed out with a bit more testing.

What happens when the CPU overheats? Or there's a memory leak? Or a sensor malfunctions? Or the program misinterprets an ambiguous sensor reading? Or it just plain glitches out and acts in an unexpected way?

What should happen if someone dies due to a self-driving car's software bug?
#67 · SamuelA · 11-22-2017, 04:52 PM
Quote:
Originally Posted by gatorslap View Post
You all seem rather confident in the ability of the programmers of these machines to create self-driving cars that will accurately execute their intended algorithm 100% of the time.

What should happen if someone dies due to a self-driving car's software bug?
Well, yeah. Not 100%, but very close. Waymo (Google) is at about 99.999%. And that's 2016 data; they are so confident in their improvements made since then that they are now sending autonomous vehicles out without a safety driver.

What happens if your airbags detonate because of a bad sensor? What happens if your airbag cartridges have metal debris in them and you get into a minor crash? What happens if your brake lines rupture? What happens if a firmware bug in your ECU causes a sudden burst of unintended acceleration?

Driving is dangerous, and autonomous cars won't be perfectly safe. The hope is that they will be a lot safer, however.
#68 · griffin1977 · 11-22-2017, 04:58 PM
Quote:
Originally Posted by gatorslap View Post

What happens when the CPU overheats? Or there's a memory leak? Or a sensor malfunctions? Or the program misinterprets an ambiguous sensor reading? Or it just plain glitches out and acts in an unexpected way?

What should happen if someone dies due to a self-driving car's software bug?
Same thing as when any other piece of software fails. We've been handling the legal results of failures of important computer systems for decades (we were very nearly all wiped out because of one).

This is different to a computer system operating exactly as intended, and then choosing to kill someone.
#69 · SamuelA · 11-22-2017, 05:00 PM
When someone is hurt, well, there's going to be a recording of the whole incident on the vehicle's flash storage. And the matter will be settled either in arbitration or in court. Either way, the company will pay something. How much they pay will be variable depending on who the victims were, how good their attorneys were, random chance, and so on. The company will carry an insurance policy against this, the same way a trucking firm or cab firm carries insurance - though large companies will mainly self-insure as this is cheaper.

They will pass the average monthly cost of these payouts down to the autonomous vehicle renters/owners. You will have to pay a monthly or yearly subscription fee to have an autonomous vehicle. The main part of that fee will be the insurance, and the rest will cover the constant software updates and map updates needed. It's possible that some manufacturers won't even sell autonomous vehicles at all; they'll just rent them. Most individuals probably won't want to pay for their own autonomous vehicle because this will not be cheap: it makes much more sense to send it out into a pool and have it collect revenue like a taxi.
#70 · SamuelA · 11-22-2017, 05:02 PM
Quote:
Originally Posted by griffin1977 View Post
This is different to a computer system operating exactly as intended, and then choosing to kill someone.
You'll at least be able to play back the recording in court and show that the vehicle had to choose between killing someone outside the car or inside the car, and thus why it chose the path it did. The manufacturer would still have to pay either way, but if the decision made by the car was clearly sensible, the judge/jurors will probably not be so angry that they assign excessive punitive damages.

The link I gave is one where the jury assessed a $4.9 billion judgment against GM, who...well...have a long history of putting substandard equipment into cars, killing dozens of people.

#71 · Tom Tildrum · 11-22-2017, 05:38 PM
Quote:
Originally Posted by Mijin View Post
A sensible driver in this situation simply does not invoke the dilemma.
I think you're too sanguine that the road is always predictable. A tire blowing out on a nearby car or something flying off a truck into the road can create unexpected dangers, for instance.

Or just consider the classic example of a child running out from between parked cars. Hard braking alone carries some risk of hitting the child. Swerving into the next lane over reduces the risk of hitting the child, but increases the risk of hitting another car. If it's a four-lane road, traffic in the next lane over is moving the same direction as you, and an accident might be minor. If it's a two-lane road, then you might face a head-on collision, but maybe there is room for the oncoming driver to stop in time, or reduce speed to render the collision likely non-fatal.

A human driver probably can't weigh all of these considerations in real time, but a computer would have more capacity to do so, and presumably would be expected to. The question then becomes how?
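As a toy version of that weighing (the speed, gap, and lane assumption below are invented; a real planner would estimate them from sensor data and far better models):

Code:
def stopping_distance(speed_ms, decel_ms2=8.0):
    """Hard-braking distance, with no reaction delay (the computer is already braking)."""
    return speed_ms ** 2 / (2 * decel_ms2)

speed = 14.0                 # m/s, roughly 50 km/h (assumed)
gap_to_child = 15.0          # metres (assumed)
next_lane_oncoming = False   # assumed four-lane road: the next lane moves our way

if stopping_distance(speed) <= gap_to_child:
    decision = "brake only: the car stops short of the child"
elif not next_lane_oncoming:
    decision = "brake and swerve: worst case is a same-direction, low-speed-difference collision"
else:
    decision = "brake hard: swerving trades the child's risk for a possible head-on collision"

print(round(stopping_distance(speed), 1), decision)   # 12.2 m, brake only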
#72 · Chunkylord · 11-22-2017, 06:16 PM
This is an interesting question to casually broach at a dinner party, debate group, internet forum or some such. But realistically, I think we overestimate how important this actually is for these cars. Trying to program cars with things like 'moral algorithms' and software that supposedly prioritizes a driver's safety over pedestrians is over-complicating the matter, and would just create more room for error and liability. Realistically, a self-driving car will simply slam on the brakes when it figures that a frontal collision is about to happen. No swerving or drifting or whatever. That kind of action would probably cause more harm anyway. We'll just have to rely on the pretty good brake reaction times (I've read that the average brake reaction time of a driver is 2.3 seconds. Would this mean that self-driving cars could effectively begin braking an average of 2.3 seconds faster than humans? That's pretty huge if true!)
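As a rough sanity check on that 2.3-second figure (the highway speed below is an assumed example):

Code:
reaction_time_s = 2.3          # figure quoted above
speed_kmh = 100                # assumed highway speed
speed_ms = speed_kmh / 3.6     # about 27.8 m/s

distance_saved_m = speed_ms * reaction_time_s
print(round(distance_saved_m, 1))   # ~63.9 m of extra braking room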

No one doubts that self-driving cars will have collisions and that people will die because of them. But they don't have to be perfect. They just have to be better than humans (this is not a very high bar).
#73 · Mijin · 11-22-2017, 09:57 PM
Quote:
Originally Posted by Mijin
There is not going to be some algorithm saying Driver > Fat kid > Thin kid
Quote:
Originally Posted by Wolfman
Yes there will be.
Quote:
Originally Posted by SamuelA View Post
Pretty sure there already is.
And you're both wrong.
Just think about it from a legal perspective if nothing else; you think people would be happy with an algorithm saying "Hey, I figured knocking your child over was better than killing the occupants, so..."

I have worked on safety-critical systems. Unless you're working on a military system you don't get to deliberately kill people, even to prevent others dying.
Any such harm is always implicit e.g. "If system becomes critical, allow water to overflow" (which we know may cause a storage tank to blow out, which we know may harm people in the storage facility, which we consider less important than preventing a meltdown. But nowhere in the code is "Kill the people in the storage facility")
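A tiny sketch of what that "implicit harm" style looks like in code; the system, states, and threshold here are invented for illustration:

Code:
def overpressure_response(pressure_kpa, critical_threshold_kpa=900.0):
    """Pick a relief action based only on measured system state (values invented)."""
    if pressure_kpa >= critical_threshold_kpa:
        # Nowhere here does the code mention the people in the storage facility;
        # the overflow and anything it causes are implicit physical consequences.
        return "open_relief_valve_to_storage_overflow"
    return "normal_operation"

print(overpressure_response(950.0))   # -> open_relief_valve_to_storage_overflow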

Quote:
Originally Posted by Tom Tildrum
I think you're too sanguine that the road is always predictable. A tire blowing out on a nearby car or something flying off a truck into the road can create unexpected dangers, for instance.
Not at all. I'm not saying there are no possibilities for accidents. And indeed there are unavoidable accidents, for even a perfect AI.

What I'm dubious about is this dilemma coming up. All the examples I've heard need the AI to do something dumb first to get itself into the situation.
#74 · SamuelA · 11-23-2017, 02:38 AM
Quote:
Originally Posted by Mijin View Post
And you're both wrong.

What I'm dubious about is this dilemma coming up. All the examples I've heard need the AI to do something dumb first to get itself into the situation.
Please at least read our examples before making that statement. We've talked about constrained lanes, girl and boy scouts jumping suddenly out of the bushes into a live highway (which is a death sentence if you do that today with human driven cars), and situations where in a crowded highway, a crash leaves the SDV nowhere to go.
#75 · Smapti · 11-23-2017, 03:48 AM
Quote:
Originally Posted by SamuelA View Post
Please at least read our examples before making that statement. We've talked about constrained lanes, girl and boy scouts jumping suddenly out of the bushes into a live highway (which is a death sentence if you do that today with human driven cars), and situations where in a crowded highway, a crash leaves the SDV nowhere to go.
In that case, the solution is "the oncoming vehicle is also self-driven, and both vehicles come to a stop without hurting anyone".

This hypothetical can only occur due to human error. Eliminate the human factor and you eliminate the problem.
#76 · Mijin · 11-23-2017, 04:02 AM
Quote:
Originally Posted by SamuelA View Post
Please at least read our examples before making that statement. We've talked about constrained lanes, girl and boy scouts jumping suddenly out of the bushes into a live highway (which is a death sentence if you do that today with human driven cars), and situations where in a crowded highway, a crash leaves the SDV nowhere to go.

Apart from a link to a webpage of miscellaneous videos, which I don't have time to watch, I've responded at length to all of the examples posed.

#77 · SamuelA · 11-23-2017, 04:07 AM
Quote:
Originally Posted by Smapti View Post
In that case, the solution is "the oncoming vehicle is also self-driven, and both vehicles come to a stop without hurting anyone".

This hypothetical can only occur due to human error. Eliminate the human factor and you eliminate the problem.
A human jumping into the road is a problem even a road system with 100% SDCs will not solve.

The engineers working on the safety-critical system of an SDC right now - all the thousands of them probably working this week - must deal with the design requirement that the vehicle make the best decision feasible on a mixed roadway that will be mostly human-driven for the design life of the system they are working on.

They don't get the luxury of working on a system where you can just default to a "safe state" without thinking about it. Maybe you worked on factory equipment, controlling a CNC machine or something. Activating the regenerative brakes on all the motor controllers on that CNC machine (or killing the power completely) always puts you into a safe state, no matter what state the CNC machine is in. Unfortunately, for a lot of safety-critical equipment today there is no one perfect failure-recovery path. If you have an electronics fault and you're controlling a ventilator, killing power is going to kill the attached patient. If you have a systems failure and you're controlling an electronic stabilizer for an airliner at altitude, you had better have a backup, because if you completely stop active stabilization, the airplane will experience "irrecoverable loss of control".
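A sketch of that point about safe states; the systems and fallback actions here are invented examples, not anyone's real fault-handling table:

Code:
# There is no one-size-fits-all "safe state": the right fallback depends on
# what the system is and what state it is in.
FALLBACKS = {
    "cnc_machine": "cut_power_and_engage_brakes",          # an idle machine is safe
    "ventilator": "switch_to_backup_controller",           # cutting power kills the patient
    "flight_stabilizer": "hand_off_to_redundant_channel",  # stopping control loses the aircraft
    "autonomous_car": "minimal_risk_stop_in_lane_or_on_shoulder",  # depends on traffic context
}

def fallback_for(system):
    return FALLBACKS.get(system, "no_generic_safe_state_defined")

print(fallback_for("ventilator"))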

#78 · SamuelA · 11-23-2017, 04:12 AM
Quote:
Originally Posted by Mijin View Post

Apart from a link to a webpage of miscellaneous videos, which I don't have time to watch, I've responded at length to all of the examples posed.
"I have worked on safety-critical systems. Unless you're working on a military system you don't get to deliberately kill people, even to prevent others dying.

What I'm dubious about is this dilemma coming up. All the examples I've heard need the AI to do something dumb first to get itself into the situation.
"

You falsely claim it's the AI's fault if people jump in front of the autonomous car. I would like you to elaborate on this claim, as we have specifically given examples where it is the fault of the people who did the jumping.
#79 · Mijin · 11-23-2017, 04:51 AM
Quote:
Originally Posted by SamuelA View Post
You falsely claim it's the AI's fault if people jump in front of the autonomous car. I would like you to elaborate on this claim, as we have specifically given examples where it is the fault of the people who did the jumping.
Not the AI's fault for being in a situation where a possible accident might occur, the AI's fault for ending up in a constrained situation, where it faces the dilemma of "Who do I kill?"

For example, Wolfman, I think, posted the video of a truck suddenly cutting across the driver's lane, and the driver being forced to swerve into the hard shoulder.
And he's right that that's a common kind of incident to occur, as drivers often don't realize there's a car in the lane beside them, in their blind spot.
There are things you can do to reduce the risk of someone cutting across you (e.g. don't linger in another vehicle's blind spot), but as we saw in the video, the risk is there even if you don't do anything wrong.

Unfortunately though, this doesn't invoke the dilemma. An AI just needs to swerve like the driver did. So then the hypothetical was proposed where now there's a parked car in the hard shoulder, so the AI would have no escape route.

But the problem with this change is that any good driver would now have cause to be extra cautious: I don't want to be passing two cars at nearly the same time right next to an exit. Let alone the fact that, legally-speaking, you're supposed to slow down if passing a broken-down car on the freeway anyway.
This is what I mean (and did already explain) about why these examples don't work.

#80 · SamuelA · 11-23-2017, 05:12 AM
Quote:
Originally Posted by Mijin View Post
Not the AI's fault for being in a situation where a possible accident might occur, the AI's fault for ending up in a constrained situation, where it faces the dilemma of "Who do I kill?"
The physical roads that exist in the United States have the possibility of these constrained situations all over. They may be rare but they do exist. If you're just going to keep denying that this can ever happen:

a. I have to doubt your claimed qualifications. Programmers are used to having cases where an extremely rare condition may happen. It's easy to just not handle that condition, assuming it's so rare it will never happen. When you do that, usually what you find out later is that this "rare" condition is happening all the time and you now have a bug in your ticket queue.

b. I have to doubt your driving experience. If you've driven a car, you would see immediately that this kind of thing happens all the time, and that even with perfect sensors that were still only attached to the vehicle you were on, you would not be able to avoid every possible situation.

#81 · Mijin · 11-23-2017, 09:14 AM
Quote:
Originally Posted by SamuelA View Post
The physical roads that exist in the United States have the possibility of these constrained situations all over. They may be rare but they do exist.
I simply made the observation that none of these common trolley examples actually "work" (i.e. none of them describes a situation a defensive driver could conceivably find themselves in).

I haven't ruled out that such a situation might be hypothetically possible, but I'm waiting for an example. I find it amazing that this topic has been talked about so much, and yet the examples often cited just don't work.

Quote:
a. I have to doubt your claimed qualifications. Programmers are used to having cases where an extremely rare condition may happen. It's easy to just not handle that condition, assuming it's so rare it will never happen. When you do that, usually what you find out later is that this "rare" condition is happening all the time and you now have a bug in your ticket queue.
This isn't even rational so I'm not even sure how to respond.
If a hypothetical is flawed and won't actually happen, then it won't actually happen.
And meanwhile, some very rare conditions are indeed very rare and don't magically become common.

Plus of course you ignored the points I made. Do you not think a company would be criminally liable if they actually had code that said "Kill A to avoid killing B"?

Quote:
b. I have to doubt your driving experience. If you've driven a car, you would trivially see that this kind of thing happens all the time and if you had perfect sensors that were still only attached to the vehicle you were on, you would not be able to avoid every possible situation.
Well I've driven cars for decades, and now, like I say, I drive a motorcycle in a notoriously dangerous city in terms of traffic fatalities.
Strange and dangerous actions happen all the time around me and it's necessary to give myself the space and time to react. Like I say, one very simple principle is: "The more constrained my situation becomes, the slower I ride". You don't speed into a narrow path where you're sweet out of options if one of the drivers around you does something stupid. Because that's often what happens.

And again, I responded in detail how a responsible driver should approach these situations. And no-one has pointed out any error in the approach that I described.

#82 · griffin1977 · 11-24-2017, 05:26 PM
Quote:
Originally Posted by Mijin View Post
I haven't ruled out that such a situation might be hypothetically possible, but I'm waiting for an example. I find it amazing that this topic has been talked about so much, and yet the examples often cited just don't work.
I showed you several examples. I am truly amazed (as someone who drives on highways and in cities all the time) that you don't understand the pretty basic fact that if you drive a car you might end up in a situation where you end up killing someone (including yourself). Yes, defensive driving will reduce that chance, but it happens. Everyone who has ever driven for long enough has encountered situations close to this (I know I have, on several occasions where if object X had been slightly closer or further to the right I would have hit it).

This is not some crazy edge case; it's a thing that is happening right now to someone on a road somewhere. You are driving along, at the speed limit, observing what's around you, following all road laws. Your lane becomes blocked by an object that is closer to you than your braking distance, and there are objects in the lanes to your right and left. You can either hit the object in front of you, the object to your left, or the object to your right. As humans, though, we aren't going to carefully analyse the ethical implications of hitting those things; we just react.

Quote:
Originally Posted by Mijin View Post
. You don't speed into a narrow path where you're sweet out of options if one of the drivers around you does something stupid. Because that's often what happens.
Because we are humans and aren't going to have time to analyse the fact that the narrow alley is there, big enough to fit us, unobstructed, etc. etc. in the
#83 · Sam Stone · 11-24-2017, 06:28 PM
A few years ago I was driving in the Rocky Mountains, and a large rock rolled onto the road right in front of me. I could have swerved to the right to avoid it, but I had been looking at that shoulder for miles as it was soft and crumbly and a long drop on the other side. Also, it was winter and even though the road looked okay there was black ice around.

So, I elected to just hit the damned rock. There was a massive bang and the car lurched to the right anyway, but only part way onto the shoulder, which held fine.

Now, if an AI had been driving: would it understand that the shoulder was soft and dangerous? Would it understand that there was a large drop on the other side? Would it assume there might be black ice? Or would it attempt the swerve?

But there is a more important point. Suppose it swerves and goes over a cliff and kills the passenger. Who is liable? What if it doesn’t swerve, but hitting the rock throws the car out of control and kills the occupants?

It seems to me that in a case like this, no matter what the car chooses to do, if there is an injury or a fatality there are legal ramifications. As humans, we tend to give other humans a certain amount of moral/legal license when split-second decisions go bad. Are we going to do the same for automated cars?

The legal issues could kill self-driving cars just as they killed the Segway as a mass market people mover. Once cities started regulating Segways like bicycles, that was the end of the dream. So it may be with driverless cars. Or maybe not - this is an area where the future is completely unpredictable.
#84 · SamuelA · 11-24-2017, 06:49 PM
Quote:
Originally Posted by Sam Stone View Post
Now, if an AI had been driving. Would it understand that the shoulder was soft and dangerous? Would it understand that there was a large drop on the other side? Would it assume there might be black ice? Or would it attempt the swerve?

But there is a more important point. Suppose it swerves and goes over a cliff and kills the passenger. Who is liable? What if it doesn’t swerve, but hitting the rock throws the car out of control and kills the occupants?

It seems to me that in a case like this, no matter what the car chooses to do, if there is an injury or a fatality there are legal ramifications. As humans. We tend to give other humans a certain amount of moral/legal license when split-second decisions go bad. Are we going to do the same for automated cars?

The legal issues could kill self-driving cars just as they killed the Segway as a mass market people mover. Once cities started regulating Segways like bicycles, that was the end of the dream. So it may be with driverless cars. Or maybe not - this is an area where the future is completely unpredictable.
You're right in that the future is unpredictable. I initially thought that the legal issues would kill SDCs in their infancy. See, the problem is, suppose you're the Waymo corporation facing a lawsuit for someone your SDC killed. But your SDC is amazing. 19 people for every person you got killed owe their lives to you. Well, in a court of law, you don't get credit for that, and technically, a jury/judge (depends on the state) can assess infinite damages (though this might become a matter for Federal courts and subject to review by Federal judges which might keep it reasonable) for killing just one person. Even though you saved 19.

And if it's a $1 billion judgment, Waymo has to pay it. They have the cash to pay it, and can be forced through actions by law enforcement to pay up, ultimately.

Contrast this to a world where all 20 people died from human-driven cars. Nearly all humans have barely any assets, so for each of those deaths, the responsible driver (or their estate) is usually not even deep-pocketed enough to be worth suing. In many states, the required liability insurance can be as little as $30,000.

The corporations trying this are the most powerful in America, though, and technical solutions exist where you could reasonably argue the car is mostly blameless even when someone is killed. After all, there's going to be detailed video of nearly* every fatal crash an SDC was involved in, from multiple angles, and a log of what the computer thought the situation was and why it chose to do what it did.

There should be legislative protection. I think if an SDC can be shown to be, say, 10 times safer than humans, then there should be protection similar to the protection given to vaccine manufacturers. Vaccines save countless lives, but occasionally do harm a recipient. So if you're a vaccine victim, you can't sue - the law gives the drug company that made it immunity. You go to a board and will be given a certain amount of money depending on the injury.

*The data recorders are flash drives embedded in the car somewhere; obviously some extreme accidents could destroy them, though presumably they will be shielded against fire and probably mounted under the driver's seat or down in the floor somewhere.

#85 · Richard Pearse · 11-24-2017, 07:03 PM
Quote:
Originally Posted by griffin1977 View Post
I showed you several examples.
Do you mean the video of the Tesla supposedly saving its occupants from crashes? Those were poor examples that would be avoided by defensive driving, something the Tesla autopilot was not doing in each case, which I find surprising.
#86 · Ravenman · 11-24-2017, 07:14 PM
Quote:
Originally Posted by SamuelA View Post
Well, in a court of law, you don't get credit for that, and technically, a jury/judge (depends on the state) can assess infinite damages (though this might become a matter for Federal courts and subject to review by Federal judges which might keep it reasonable) for killing just one person. Even though you saved 19.
You need to come up with a cite for this right here. All of it.

#87 · SamuelA · 11-24-2017, 07:46 PM
Quote:
Originally Posted by Ravenman View Post
You need to come up with a cite for this right here. All of it.
Which part? http://www.nytimes.com/1999/07/10/us...tank-case.html

Here's a cite that is an example of the upper end. There are plenty of million-dollar judgments against automakers. But, today, cars don't record the exact sequence of events from multiple camera angles. There's uncertainty, uncertainty that the attorneys defending the automaker can exploit. An autonomous car causing a fatal accident where the car was under complete control and the recording survives leaves no uncertainty.

And under what legal mechanism does the manufacturer get credit for saving 19 people? Let's suppose I'm a mad scientist and I invented a cure for cancer that works. Instead of waiting on the FDA, I just dress up like a doctor, go to a cancer ward, and inject my cure into 20 patients. It works, and 19 patients are cancer-free. That doesn't protect me from charges for murdering the last one...and potential life in prison for that, even though I saved 19 people.

#88 · Ravenman · 11-24-2017, 08:17 PM
How about the part where the law “doesn’t give credit” for saving 19 people during a car accident?
#89 · Kropotkin · 11-24-2017, 08:41 PM
Quote:"Basically, instead of thinking of the problem like a series of edge cases, and then insisting that some edge cases are so unlikely that they will never happen in the next 30 years with hundreds of millions of autonomous vehicles on the road, just look at the algorithm.

You want to get the outcome of :

minimize damage to the occupant of the vehicle
minimize damage to other people
minimize damage to the vehicle"

Let's just reverse 1 and 2, then, since the reason for this order seems to be "no one will buy our cars if we did it the other way," which is another way of saying "let the market decide," which is pretty much saying "who has the gold makes the rules." Well, to heck with them. In fact, may I modestly propose that in any circumstance where the vehicle has to choose between the occupant and anyone else, including irresponsible scout troops, it simply activates the auto-destruct and reduces vehicle and occupant to atoms? I'm not saying we wouldn't get our hair mussed, but it's one less resource-guzzling appliance off the road. My Birkenstocks must be around here someplace....
#90 · SamuelA · 11-24-2017, 08:48 PM
Quote:
Originally Posted by Kropotkin View Post
Quote:"Basically, instead of thinking of the problem like a series of edge cases, and then insisting that some edge cases are so unlikely that they will never happen in the next 30 years with hundreds of millions of autonomous vehicles on the road, just look at the algorithm.

You want to get the outcome of :

minimize damage to the occupant of the vehicle
minimize damage to other people
minimize damage to the vehicle"

Let's just reverse 1 and 2, then, since the reason for this order seems to be "no one will buy our cars if we did it the other way," which is another way of saying "let the market decide," which is pretty much saying "who has the gold makes the rules." Well, to heck with them. In fact, may I modestly propose that in any circumstance where the vehicle has to choose between the occupant and anyone else, including irresponsible scout troops, it simply activates the auto-destruct and reduces vehicle and occupant to atoms? I'm not saying we wouldn't get our hair mussed, but it's one less resource-guzzling appliance off the road. My Birkenstocks must be around here someplace....
Then all you'd have to do to murder someone is toss a dummy or someone you don't like in front of an autonomous car on a mountain road. The car, in order to save the person outside the vehicle, drives either into the mountainside or off the edge.

#91 · Sam Stone · 11-24-2017, 10:21 PM
Or, if the car is programmed to save the passengers, all you have to do is push your victim in front of the car, which will drive over him.
#92 · Mijin · 11-24-2017, 11:25 PM
Quote:
Originally Posted by griffin1977 View Post
I showed you several examples.
Right, and I responded to those examples. What has not happened yet is anyone pointing out any error in my responses.

This is why I'm still saying these hypothetical dilemmas don't work; a safe driver would not find themselves in these situations.

Quote:
I am truly amazed (as someone who drives on highways and in cities all the time) that you don't understand the pretty basic fact that if you drive a car you might end up in a situation where you end up killing someone (including yourself). Yes, defensive driving will reduce that chance, but it happens.
I am truly amazed that you're saying that after I said: "I'm not saying there are no possibilities for accidents. And indeed there are unavoidable accidents, for even a perfect AI." [Emphasis in original]

All I have said is that all the examples so far of an AI needing to choose to kill Bill or Ben don't work; they rely on the AI doing something irrational first.
I don't even rule out that some kind of dilemma situation might be hypothetically possible; I'm just saying none have been demonstrated so far. Which is fascinating for a problem which has been discussed so, so much here and in the media.

Quote:
This is not some crazy edge case; it's a thing that is happening right now to someone on a road somewhere. You are driving along, at the speed limit, observing what's around you, following all road laws. Your lane becomes blocked by an object that is closer to you than your braking distance, and there are objects in the lanes to your right and left. You can either hit the object in front of you, the object to your left, or the object to your right. As humans, though, we aren't going to carefully analyse the ethical implications of hitting those things; we just react.
Firstly, why am I travelling at the speed limit if I'm boxed in?
Secondly, the proper distance to keep from the car in front is far enough that you can brake in time. So it can't be a "blockage", you mean a car swerving across, say, and that is something I can be alert to; I can notice the positions of cars in the lanes beside me, and be very concerned anytime I'm in the blind spot of a car travelling approximately the same speed as me.

But finally, yes of course there are hypothetical situations where all you can do is brake hard and brace for impact, and that's what an AI would do too.

Quote:
Because we are humans and aren't going to have time to analyse the fact that the narrow alley is there, big enough to fit us, unobstructed, etc. etc. in the
Not sure where you were going with this unfinished sentence, but again I have to emphasize that there is no leap of faith when riding my bike, otherwise I'd probably already be dead. I'm being serious about that.

If I'm boxed in, I slow down. If I don't know whether I'm about to be boxed in, I slow down until I've evaluated the situation. I'm not afraid to (gently) stop if something very unusual is happening in front of me. You do not speed into dangerous situations and hope for the best.
#93 · SamuelA · 11-25-2017, 01:01 AM
Quote:
Originally Posted by Sam Stone View Post
Or, if the car is programmed to save the passengers, all you have to do is push your victim in front of the car, which will drive over him.
Only if there's no other place for the car to go. If so, I consider that a reasonable outcome. If you jump in front of a city bus, nobody expects for the bus driver to swerve the bus into the sidewalk or a lamp post or something to avoid killing you. It's a cliche, even - jump in front of a city bus, and you'll be lucky if the driver hits the brakes before killing you.
#94 · SamuelA · 11-25-2017, 01:03 AM
Quote:
Originally Posted by Mijin View Post
Right, and I responded to those examples. What has not happened yet is anyone pointing out any error in my responses.


If I'm boxed in, I slow down. If I don't know whether I'm about to be boxed in, I slow down until I've evaluated the situation. I'm not afraid to (gently) stop if something very unusual is happening in front of me. You do not speed into dangerous situations and hope for the best.
Your responses are bullshit. You assume a form of constraint on the problem that doesn't exist. You are assuming because you can indeed drive - or ride - very smartly, avoiding most situations where a crash is unavoidable, that somehow your personal experiences will be true on a hundreds of millions of vehicle scale. This is incorrect. Your personal experiences are as worthless as a lottery player talking about how they never win.

None of us are saying that totally unavoidable, no good choice situations are going to be common for autonomous cars. Hell, the average driver may not have a no good choice situation happen in their entire driving lifetime. But they are possible, and you need an algorithm that can find the best choice when no perfect choice exists.

#95 · Richard Pearse · 11-25-2017, 04:59 AM
Quote:
Originally Posted by SamuelA View Post
Hell, the average driver may not have a no good choice situation happen in their entire driving lifetime. But they are possible, and you need an algorithm that can find the best choice when no perfect choice exists.
I'd say the average driver would almost certainly never have such a situation in their lifetime. You are positing incredibly unlikely scenarios.
#96 · Mijin · 11-25-2017, 05:10 AM
Quote:
Originally Posted by SamuelA View Post
Your responses are bullshit. You assume a form of constraint on the problem that doesn't exist.
You are assuming because you can indeed drive - or ride - very smartly, avoiding most situations where a crash is unavoidable, that somehow your personal experiences will be true on a hundreds of millions of vehicle scale. This is incorrect. Your personal experiences are as worthless as a lottery player talking about how they never win.
I'm not even sure what your "bullshit" point is, so let me summarize again what I'm saying.
I've only made 2 points in this thread:

1. The examples given of dilemmas a self-driving car would face don't work, i.e. they rely on the AI doing something stupid first. Note, I'm not claiming that no such situation is possible, only that no example has been given yet.
2. Self-driving AI is not going to intentionally take lives, no matter what happens. It would be a legal nightmare. In an absolute worst case where no collision can be avoided, it will just brake hard.

What part of this do you disagree with?
And what part of this implied I thought all accidents could be avoided (especially since I've said the opposite, explicitly, twice now)?
#97 · griffin1977 · 11-25-2017, 09:20 AM
Quote:
Originally Posted by Mijin View Post



Firstly, why am I travelling at the speed limit if I'm boxed in?
I meant below the speed limit (as in not speeding)

Quote:
Originally Posted by Mijin View Post
Secondly, the proper distance to keep from the car in front is far enough that you can brake in time. So it can't be a "blockage", you mean a car swerving across, say, and that is something I can be alert to; I can notice the positions of cars in the lanes beside me, and be very concerned anytime I'm in the blind spot of a car travelling approximately the same speed as me.
You can't ever keep a "proper distance" that will prevent all crashes. It is delusional to think you can. There will always be cases where an object appears in front of you too quickly for you to stop, for countless reasons (e.g. the car in front of you crashes, a car enters your lane, a pedestrian crosses the street, a truck reverses out, a car leaves the oncoming lane, etc.). Additionally there is no guarantee (as in those cases) that you aren't going to see the stationary/oncoming object at the same time as you see the objects on your left or right (and the objects don't need to be cars, do you slow down to walking speed on the freeway every time you pass a bridge?)

That is just something you have to accept if you are driving. You can minimize the chances of that happening by driving defensively. But they happen, they aren't weird edge cases, they are actually happening to someone right now somewhere in the world.
#98 · Mijin · 11-25-2017, 10:41 AM
Quote:
Originally Posted by griffin1977 View Post
You can't ever keep a "proper distance" that will prevent all crashes. It is delusional to think you can.
I said proper distance from the car in front, i.e. in your lane.
You always should keep that car out of your braking distance, and I'm interested to hear any excuses for not doing so.

Quote:
Additionally there is no guarantee (as in those cases) that you aren't going to see the stationary/oncoming object at the same time as you see the objects on your left or right
Objects appearing in 3 directions at once, without me being able to anticipate any of them? Like when?

Quote:
(and the objects don't need to be cars, do you slow down to walking speed on the freeway every time you pass a bridge?)
Again, what's a safe speed depends on what kind of road we're talking about. On the freeway, lanes are very wide, bends are very gentle, and if there are bridges, they are usually far from flush with the side of the road.

But if the freeway were suddenly to narrow, such that I had to pass under a bridge that had pillars just a couple metres from the side of the road (as required for a human to conceivably run out in time in the last seconds), yeah I'd slow down. How much I slow down would depend on how narrow we're talking about.

Quote:
You can minimize the chances of that happening by driving defensively. But they happen, they aren't weird edge cases, they are actually happening to someone right now somewhere in the world.
Yes obviously. It's getting annoying to have to re-confirm in every post that there are undoubtedly unavoidable accidents. I never said otherwise.
#99 · bordelond · 11-25-2017, 12:59 PM
Quote:
Originally Posted by griffin1977 View Post
Here is a compilation of Teslas avoiding crashes. In many of these examples a slightly different layout of the vehicles or road would have resulted in a "trolley problem" decision.
https://www.youtube.com/watch?v=--xITOqlBCM
I like the one at 40-some-odd seconds. There is a double human error compounding the situation:

a) a car clearly missing its exit stopping cold on the interstate (as opposed to the unoccupied shoulder) to make the exit ramp at the last possible second.

b) the car behind the car above having both red tail lights out (the only light that lit up was the rear right yellow blinker).

And the AI still misses the wreck.

...

It sucked that the pig got clipped, but it was still able to run away immediately at speed. If hurt, it couldn't have been catastrophically bad.
#100 · SamuelA · 11-25-2017, 01:33 PM
Quote:
Originally Posted by Mijin View Post
Yes obviously. It's getting annoying to have to re-confirm in every post that there are undoubtedly unavoidable accidents. I never said otherwise.
Then what are we arguing about? All we're talking about in this thread is what to do when the accident is unavoidable.

Also, another detail that you probably don't realize: the way an AI sees the world isn't quite like you think. Instead of perceiving objects as having a definite, solid presence in a specific spot, a better method is to see the object's position and boundaries as having a probability distribution. More like a cloud of possibilities. This is because there's sensor error, vehicle control error, the object moving on its own, and other factors that lead to a certain amount of uncertainty.

So when the car plans a path, it's actually discretely adding up these hypothetical collisions with the object even if there's only a 1% chance that, say, the red car in the other lane is actually 1 foot to the right or is going to swerve suddenly into us.

So in the math, at a very low level, it needs to value these collisions properly, even if they virtually never happen. It needs to weight by the velocity difference, the amount of armor this particular car has for collisions from that angle, which seats are occupied by passengers, and so on.

In addition, there's always going to be a nonzero chance of classification error. That snow falling might actually be a wall. That distant object might be another car, not an object on the side of the road. One way we can work this out is to collect data on what the classifier thought an object was in earlier observations from farther away and what it corrected itself to, and both improve our model and also store how often this happens, so that we have an actual, usable uncertainty. The math of planning actually works just fine if we're only 70% sure that distant object is another vehicle and 30% sure it's a side-of-the-road obstacle. We can choose a course of action that is optimal for the union of the two cases.
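A minimal sketch of that "cloud of possibilities" scoring; the object classes, probabilities, and severity numbers below are all invented placeholders:

Code:
# Score one candidate path under uncertainty: each nearby object contributes a
# probability-weighted collision cost, and classification uncertainty is handled
# by averaging over what the object might actually be. All numbers invented.
COLLISION_COST = {
    "vehicle": 8.0,
    "pedestrian": 10.0,
    "roadside_obstacle": 3.0,
    "falling_snow": 0.0,
}

def path_risk(objects):
    """objects: list of dicts with 'class_probs' (class -> probability the object
    is that class) and 'collision_prob' (chance this path clips the object)."""
    total = 0.0
    for obj in objects:
        expected_severity = sum(p * COLLISION_COST[cls]
                                for cls, p in obj["class_probs"].items())
        total += obj["collision_prob"] * expected_severity
    return total

candidate_path_objects = [
    # 70% "it's a vehicle", 30% "it's a roadside obstacle", 1% chance the path clips it
    {"class_probs": {"vehicle": 0.7, "roadside_obstacle": 0.3}, "collision_prob": 0.01},
    # almost certainly just falling snow, but the path passes straight through it
    {"class_probs": {"falling_snow": 0.95, "roadside_obstacle": 0.05}, "collision_prob": 1.0},
]
print(round(path_risk(candidate_path_objects), 3))   # 0.215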
