Re "self driving cars" people talk as if this is a few years away. This seems nuts to me.

From what I’ve seen, Uber drivers screw up when they ignore the Uber GPS. They frequently try to take their own shortcuts.

Well, the disagreement isn’t so much whether it will ever happen as whether it will happen soon. The naysayers believe that some of the challenges will take longer than five years to overcome; the more positive types think they’re not so difficult.

You can give a car the abilities of a Kinect, but it must be far more robust than a Kinect because the consequences of failure are much higher. Even if a failure just means a car gets stranded in a paddock, that could cause a major headache for the people directly involved. If a Kinect fails to correctly interpret a gesture, nothing happens and no one cares.

I think using people to direct autonomous cars around a temporary parking area is an incredibly inefficient way of tackling the problem, and that we’d be better off coming up with a solution that does away with the parking attendant completely.

Of course, if you keep manual controls for the human, you don’t need to worry about these fringe cases: when the car gets to the limit of its capability, it hands over to the human, and the human does the bit the car isn’t programmed for. You use the car for the predictable driving tasks, the stuff a computer is good at and people aren’t as good at, and you use the person for the fuzzy fringe cases: things the computer may theoretically be able to handle, but which it may be impractical to program for.

One option which has been discussed upthread is to include a manual override setting. Perhaps that’s a setting not intended for extended use, or for high-speed use on a public road, but one that allows a human driver to input direction and speed to the computer at times when driving needs are outside the computer’s abilities. The computer might still exercise emergency control, such as not running over a pedestrian, but would otherwise allow manual movement.
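To make that concrete, here’s a minimal sketch of what that kind of arbitration could look like. This isn’t any manufacturer’s actual logic; the names, the speed cap, and the 2 m “too close” threshold are all made up for illustration, but it shows the idea of passing the human’s inputs through while the computer keeps an emergency veto.

```python
# Hypothetical sketch of "manual override with emergency veto" arbitration.
# Names and thresholds are illustrative, not taken from any real vehicle stack.

from dataclasses import dataclass

@dataclass
class DriveCommand:
    steering_deg: float   # requested steering angle
    speed_mps: float      # requested speed, metres per second

MAX_OVERRIDE_SPEED_MPS = 8.0  # assume manual mode is capped to a crawling pace

def arbitrate(manual: DriveCommand,
              obstacle_ahead: bool,
              obstacle_distance_m: float) -> DriveCommand:
    """Pass the human's inputs through, but let the computer veto them
    when its sensors say an impact is imminent."""
    # Clamp manual mode to a low speed: it's for paddocks and car parks,
    # not for highway driving.
    speed = min(manual.speed_mps, MAX_OVERRIDE_SPEED_MPS)

    # Emergency veto: if something is too close in the direction of travel,
    # the computer refuses to move regardless of what the human asks for.
    if obstacle_ahead and obstacle_distance_m < 2.0:
        return DriveCommand(steering_deg=manual.steering_deg, speed_mps=0.0)

    return DriveCommand(steering_deg=manual.steering_deg, speed_mps=speed)

# Example: human asks for 5 m/s but a fence post is 1.2 m ahead -> car refuses to move.
print(arbitrate(DriveCommand(steering_deg=-10.0, speed_mps=5.0), True, 1.2))
```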

An alternative is what I just found here, where Land Rover is researching autonomous off-road driving capabilities. The off-road-capable vehicle that you buy for your rough-terrain area may come equipped with autonomous off-road capabilities that wouldn’t be needed in a Camry, and which can handle the situations you describe.

At any rate, I could see a day when self-driving is required for all new cars, and human driving is allowed only on an exception basis, i.e. for cars made before self-driving was available. Even in this case, I’d be 100% OK with people being able to get the ‘Human Drive’ option (full instrument controls) for times when they aren’t on public roads, so they can do whatever their hearts desire.

Tesla just sent out a video of one of their vehicles navigating through stop signs, stoplights, and Palo Alto traffic, and then finding a parking spot. It seems to be all running on hardware shipping with their current (as of today) production vehicles.

The video seems pretty impressive to me. Granted, Google has been doing this sort of stuff for a while, but this is on cars that can be purchased right now.

8 cameras, 12 ultrasonic sensors, and a forward radar.

Sure, I’m not an engineer solving the problems of driverless cars, but I’m sure I can think up some scenario that they’ll never think of trying to solve, because my understanding of their technology is superior.

Hmm… Palo Alto.
Sunshine and clearly marked lanes on open roads; and when the driver steps out, the car goes off alone to the convenient parking spot nearby.

I wonder how long we’ll have to wait to see a similar video: navigating downtown Chicago during a snowstorm, and then the car going off to park itself in an underground parking garage, pausing to reach out to the machine to take the time-stamped ticket.

It’s impressive technology, and there’s a big market of people who will buy a robo-car, even if it is very limited and can only be used for simple tasks in good weather.

But regular cars will still be a majority of the vehicles on the road for the next 50 years or more.

I can imagine several ways that parking garages can adapt to allow for self-driving cars to park themselves.

Self-driving cars already use parking structures.

That’s a good summary of how I feel about it. We really would need a fully functioning general AI to have truly good, adaptable self-driving cars. At that point maybe it would be more efficient to just have a human-shaped AI robot sit in the driver’s seat so it could work with any vehicle.

The only reason to favor a humanoid robot driver is if you need, for some reason, to automate the driving of older cars. Because otherwise, it seems “simpler” just to design an autonomous car that operates independently of the manual controls, or has no manual controls at all.

Funny enough, in aviation they’re building a robot pilot that’s roughly anthropomorphic.

The intent is to be able to retrofit it into manned aircraft with less hassle. You strap the main box into the seat, attach “arms” and “legs” to the physical controls then interface it to the aircraft’s databus so it can “read” the instruments and “push” the buttons. Viola as the French don’t say.

For sure this is late-stage military research, not something intended for public use next year. But the idea has occurred to people.

Otto Pilot?

As with many people above, I don’t believe that anytime soon cars will be able to deal with all the unusual situations that occur. Can it accept traffic directions from a police officer? Can it decide to swerve into a ditch to avoid a child in the road or a giant hunk of concrete, but to run over a dog or a cardboard box? Can it tell the difference between a large puddle and a dangerously flooded road?

I would worry about sudden, unexpected failures of the software. Perhaps you drive the car in an environment unlike any place it was tested in, and the pattern-recognition and such fails. Maybe the car drives you straight into some muddy water thinking it’s a dirt road, or onto the roof of a building, or onto a tree canopy next to a cliffside road.

The possibility of hacking worries me most, if self-driving cars become universal. What if someone got access to the computers Tesla uses to write the software and inserted malicious code? The culprit could be a Stuxnet-like virus, a foreign agent, or a disgruntled employee. Some subtle bit of obfuscated code gets hidden in a software update, and then one day at a particular time, all of the affected cars crash into the nearest obstacle at maximum speed. The likelihood of that happening might be low, but the damage would be enormous, potentially millions dead. Or the attack could cause massive economic damage by simply making the cars stop working. What if 2/3 of the semi trucks in the country were all bricked at the same time?

How many updates to Windows have contained malicious code inserted by “disgruntled employees”? There are far more vulnerable channels than the embedded systems in a car for causing massive society-wide problems. It is a fairly simple matter for software updates to require some manual process, or to be installed or rolled out to a small cross-section of vehicles at a time. We are also talking about a dozen competing manufacturers writing their own software for their cars. Even given the lottery-grade odds of an exploit getting through, the chance of an exploit having any meaningful effect across multiple brands is vanishingly small, to the point of non-existence.
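For what it’s worth, here’s a rough sketch of the kind of staged rollout I mean. The wave sizes and the health check are invented for the example; a real fleet would gate each wave on telemetry and manual sign-off over days or weeks, but the shape of the process is the same.

```python
# Illustrative sketch of a staged ("canary") software rollout.
# Fleet size, wave percentages, and the health check are all made up.

import random

def staged_rollout(fleet_ids, waves=(0.01, 0.05, 0.25, 1.0), healthy=lambda ids: True):
    """Push an update to progressively larger cross-sections of the fleet,
    stopping the moment a wave looks unhealthy."""
    remaining = list(fleet_ids)
    random.shuffle(remaining)          # pick each wave at random
    updated = []
    for fraction in waves:
        target = int(len(fleet_ids) * fraction)
        wave = remaining[len(updated):target]
        updated.extend(wave)
        # In reality this would be telemetry monitoring plus a manual
        # sign-off before the next, larger wave is allowed to go out.
        if not healthy(updated):
            return updated, "halted"
    return updated, "complete"

cars = [f"VIN{i:05d}" for i in range(10_000)]
done, status = staged_rollout(cars)
print(len(done), status)   # 10000 complete (with the trivial health check)
```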

Most of your concerns have already been torn apart in other threads; in most of the situations you mention, the computer is already more likely to see the hazard coming and stop. The computer reacts far faster than any human being. If the computer was unable to stop in time to avoid hitting a little kid, a human driver in many cases would only just be realizing they had run over the kid and might want to try to stop now.
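Just to put rough numbers on the reaction-time point: assuming a commonly cited ~1.5 s human perception-reaction time versus, say, 0.1 s of sensor-to-brake latency for the computer (both figures are ballpark assumptions, not specs for any real system), the distance covered before braking even starts looks like this at 50 km/h:

```python
# Rough back-of-the-envelope comparison of how far a car travels before the
# brakes are even applied. The reaction times are typical textbook figures,
# not measurements of any particular system.

speed_kmh = 50
speed_ms = speed_kmh / 3.6            # ~13.9 m/s

human_reaction_s = 1.5                # commonly cited perception-reaction time
computer_reaction_s = 0.1             # assumed sensor-to-brake latency

print(f"Human: {speed_ms * human_reaction_s:.1f} m before braking starts")
print(f"Computer: {speed_ms * computer_reaction_s:.1f} m before braking starts")
# Human: 20.8 m before braking starts
# Computer: 1.4 m before braking starts
```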

For every potential problem scenario you suggest above we can also suggest an equivalent situation where autonomous functionality removes the possibility of driver error.

Overall, given that no situation is free from risk, is autonomous driving safer than purely human?

I keep trying to tell people that the future is now and they keep not believing me, but (creepy little girl voice) [they’re here](http://bigstory.ap.org/article/uber-self-driving-cars-hit-streets-san-francisco).
As usual, however, Uber is flouting the law by simply claiming that they aren’t doing what they are clearly doing:

Seriously, fuck this company.

Agree that Uber are pirates and ought to be shut down.

But if self-driving cars can hack it in San Francisco, they’ll do fine in pretty much all of urban and suburban America, plus interstates and small towns. Which covers the vast majority of the population, albeit not much of the total land area.

Hey, look! California Tells Uber To Stop Rides In Self-Driving Cars!

A new development has just been announced.

http://www.nytimes.com/2016/12/13/technology/cars-talking-to-one-another-they-could-under-proposed-safety-rules.html

More details:

Unsurprising, since Uber is a company built upon lies.