Driving while distracted? What about driving while human?

Now that technology has progressed to the point that computer control is safer than human control of a vehicle, why are you still driving manually? Why have you risked others because you felt you could drive safely, even though study after study has shown that computer autopilot is far safer than human-controlled vehicles?

Even if you turned off your smartphone so you would not be distracted, you knew the risk of driving without autopilot is much higher compared to computer control. You could have safely been texting or even eating a Big Mac™ behind the wheel if you had used autopilot. You could also have been given partial control, with the computer taking over only if you became a hazard, yet you forwent all of this to drive manually, and in doing so you have become a danger to society. You didn’t even wear eyewear that can project a warning when a hazard is upcoming.

You have been deemed a menace to society; you will no longer be allowed to drive without full computer control on public roads.

How long till we get to the point where we stop worrying about texting while driving and cars are safe enough to drive themselves, actually safer than us driving them?

Do you fear technology or embrace it?

Which studies are you referring to?

The initial study was done by the Earth Science Academy, published March 18th, 2015, and has been confirmed many times over since then. Why do you ask?

This supports my original hypothesis that **kanicbird** is a time-travelling outside observer.

I believe in the case of Megazone X27A v. Jackson (2150), the Central Command Omnitelligence found that the defendant could safely manually operate ground-based vehicles under certain conditions, even though he was considered “human” under the Cyborg Anti-discrimination Act of 2075.

Assuming the studies are real, safety is only part of the question. I think most rational people would embrace safer cars.

The more significant question is cost. At $100 per car, something that is safer probably makes sense. At $100,000, it probably doesn’t. I imagine the technology (assuming again that it currently exists) comes at a cost that is much higher than the benefit, and for that reason it has not been adopted.
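
To put rough numbers on that intuition, here’s a back-of-the-envelope check. Every figure below is invented for illustration; the point is just that the verdict flips somewhere between the two price tags:

```python
# Back-of-the-envelope cost-benefit check; every figure here is invented.
baseline_risk = 0.01          # assumed per-car annual accident probability
risk_reduction = 0.99         # assumed fraction of accidents eliminated
cost_per_accident = 50_000    # assumed average cost of an accident, in dollars

# Rough annual benefit per car: risk eliminated times cost of an accident.
expected_benefit = baseline_risk * risk_reduction * cost_per_accident  # ~$495

for price in (100, 100_000):
    verdict = "probably worth it" if expected_benefit > price else "probably not"
    print(f"${price:,} per car: {verdict}")
```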

Four words: Blue Screen of Death

So how did this post from the far future wind up on SDMB in 2013? The idiot was self-driving his Alcubierremobile, intending to travel from Earth to Eminiar VII (estimated Alcubierre driving time, 6 days or so) but made a wrong turn at Alpha Centauri (thanks, GPS) and landed in Chicago, 2013. Now he’s irreversibly stuck here, and his family wonders what became of him.

So, yeah, computer-driven driving is the coming thing. Might as well welcome your new road trip overlords, for resistance will be futile.

Let’s say a car controlled by an automated system is 100 times less likely to be in an accident. Let’s say said automated system controls 10,000 cars.
Let’s say said automated system is compromised, either accidentally or on purpose.
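
Completing that thought with some arithmetic (every number below is invented for illustration): individually the automated cars are far safer, but a compromise of the shared controller is a correlated failure, so a single incident can touch the entire fleet at once:

```python
# Illustrative arithmetic only; every figure here is invented.
fleet_size = 10_000
human_rate = 0.01                 # assumed per-car annual accident probability
auto_rate = human_rate / 100      # "100 times less likely"

# Normal operation: failures are independent, so expected accidents scale linearly.
print(fleet_size * human_rate)    # 100 accidents/year with human drivers
print(fleet_size * auto_rate)     # 1 accident/year under automation

# Compromise scenario: a single fault in the shared controller is correlated
# across the fleet, so one incident can involve all 10,000 cars at once.
print(fleet_size)                 # worst-case cars affected by one compromise
```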

So when these self-driven cars do make mistakes (and they will, though less often than human drivers) and cause accidents, who is at fault in these crashes? If there are injuries, who pays for everything? The car manufacturer?

Surely there is an analogue already in the airline industry? Assuming the autodriver system mirrored what happens in aviation today, it wouldn’t matter how automated your car is: the driver would be ultimately responsible for any accidents. Autodriver screws up? The human driver didn’t monitor adequately and take over at the appropriate time. Likewise, I’d expect that regardless of how safe the autodriver was, the human driver would be expected to drive manually often enough to maintain proficiency, but only when it is appropriate to do so. There might be certain freeways that are autodriver-only, and if the autodriver fails in some way, the human driver must take the next exit and use the slower, less direct non-autodrive freeway.

There was another entire thread nearby, just a week or so ago, on exactly this question.

Off topic, but did you ever take a tour of the disintegration chambers on Eminiar VII? Very moving.

Machines don’t “make mistakes”. They do what they are programmed to do and sometimes they are either programmed incorrectly or otherwise malfunction. So “fault” would largely depend on whether an incident was caused by a manufacturing or programming defect or a failure on the owner’s part to have the car routinely inspected and maintained. Like failing to bring the car in every seven years to have the positronic brain replaced before it goes insane.

Buried somewhere deep in the license agreement:

Neither Microsoft, nor any of its subsidiaries, shall be held liable for damages caused by the use of Windows Commute. Driver assumes all responsibility for accidents, property damage, injury or loss of life that may result from use of this product.

Click OK to accept these terms.

[ ] OK
[ ] CANCEL

Note: clicking CANCEL will render this vehicle inoperable.

You’re making the assumption that it would be under centralized control, not semi-independent distributed control.

As for liability, perhaps its time has come too. This came to me while hearing about the Metro-North train derailment: the conductor fell into a ‘daze’ and didn’t slow down in time to make the 30 mph turn.

In driver’s education and defensive driving classes, they warn the driver about ‘highway hypnosis’, where you become accustomed to the high speed and sort of zone out a bit.

This is a normal thing for humans to do; how much more so for a train operator with only a throttle control and a regular route? It’s part of being human.

So what is the solution for Metro-North that could have prevented this? My answer is technology. It seems it could have been simple to install an overspeed and auto-braking device on the trains that could have prevented the disaster.
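
For what it’s worth, the core of such a device really can be simple in principle. Here’s a minimal sketch; the speed limit is from the curve above, the margin is an assumption, and the sensor and brake interfaces are hypothetical stand-ins (real systems like positive train control are far more involved):

```python
# A minimal sketch of overspeed protection; interfaces are hypothetical.
SPEED_LIMIT_MPH = 30          # posted limit for the curve
OVERSPEED_MARGIN_MPH = 5      # assumed tolerance before the system intervenes

def check_overspeed(current_speed_mph: float) -> bool:
    """Return True if the train is going too fast and the brakes should engage."""
    return current_speed_mph > SPEED_LIMIT_MPH + OVERSPEED_MARGIN_MPH

def control_loop(read_speed, apply_brakes):
    """Poll the speedometer; trigger automatic braking on overspeed.

    `read_speed` and `apply_brakes` stand in for real sensor and
    brake-actuator interfaces, which this sketch does not define.
    """
    speed = read_speed()
    if check_overspeed(speed):
        apply_brakes()   # slow the train regardless of operator input

# Example: at 82 mph into a 30 mph curve, the brakes engage.
control_loop(lambda: 82.0, lambda: print("AUTO-BRAKE ENGAGED"))
```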

Instead of blaming the conductor for being human, why not work towards finding the problem and correcting it? Just replacing the conductor with another human does not solve the problem. Honestly, technology itself doesn’t solve the problem either, but if it’s safer, that’s what we should be working towards.

Back to the road:

Insurers should have little trouble insuring autodriving cars that are statistically safer than human-controlled ones. Therefore some form of compensation should be available without needing to find someone to blame.
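
In that spirit, a no-fault scheme could price coverage off expected losses alone. A toy sketch with invented rates and costs: if autodriving cars crash a tenth as often, the premium needed to cover the pool falls in proportion, no blame-finding required:

```python
# Toy actuarial arithmetic; all rates and costs are invented.
avg_claim = 20_000             # assumed average payout per accident, in dollars
loading = 1.3                  # overhead/profit margin on the fair premium

human_rate = 0.05              # assumed annual accident probability, human driver
auto_rate = 0.005              # assumed rate for autodriving cars (10x safer)

# A no-fault premium only needs to cover the pool's expected losses.
print(human_rate * avg_claim * loading)   # $1,300/year for a human-driven car
print(auto_rate * avg_claim * loading)    # $130/year for an autodriving car
```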

Due to advances in technology, this may become a phrase with literal meaning.

There was an earlier thread, in either August or April, wondering the same thing with regard to limiting train speeds. Apparently the technology is already in use in Europe.

If this means insurers are prevented from considering consumer credit in their rating policies again, I’m all for it.

I don’t think that it has been established that these cars are safer (not that I am questioning your cite of a report that will be released in the future).