How do self-driving cars deal with stoplights?

ETA: @Tired & Cranky mostly, with a side of Chronos

Granted completely.

What I’m pointing out is simply that in all these discussions of AI cars, and in other discussions of crowdsourcing, folks tend to forget blackhat crowdsourcing.

I don’t doubt that cars themselves can be made almost entirely hackproof. As you say, they’re not yet. But as the necessity arrives, so will sufficient armor plating and defense in depth.

But crowdsourced databases are another matter. Their very purpose is to accept small, simple snippets of data from millions or billions of small-CPU devices, via protocols built to publicly published, well-known IT standards.

Recognizing and filtering out attempted blackhat inputs to those databases is itself a non-trivial AI problem.
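
To make the point concrete, here's a toy Python sketch of the naive filter everyone reaches for first, a majority vote over incoming reports. Everything in it is invented for illustration, and it's exactly the rule a botnet beats, because fake reporters are cheap.

```python
from collections import Counter

def naive_consensus(reports):
    """Naive crowdsource filter: accept whatever value most devices report.

    `reports` is a list of (device_id, value) pairs. Fake devices are
    cheap, so any attacker who submits more reports than the honest
    users simply wins the vote.
    """
    counts = Counter(value for _device, value in reports)
    value, _count = counts.most_common(1)[0]
    return value

# Three honest cars see a 40 mph limit; five scripted "ghost cars" say 70.
honest = [(f"car{i}", "40 mph") for i in range(3)]
botnet = [(f"ghost{i}", "70 mph") for i in range(5)]
print(naive_consensus(honest + botnet))  # -> "70 mph"
```

Doing better than this (telling coordinated lies apart from honest noise) is the non-trivial part.
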
I’m reminded of a line from Colossus: The Forbin Project. One scientist says to another, “But if we take away its weapons, it’s just a glorified adding machine.”

The converse applies here. By connecting our autonomous machines, such as cars, to our crowdsourced databases, we’ll finally be connecting real-world “weapons” to our blackhat-susceptible adding machines.

Which will have adverse consequences.

Late add … Let me clarify something.

I’m using the word “crowdsourcing” as shorthand for the underlying technology of centralized databases updated by very large numbers of non-trustworthy simple distributed user processes, be they human or machine.

I’m *not* referring to the social phenomenon of people offering observations and opinions, be they true, deliberately false, or misinformed.

On the sidewalks/in sensor range? Yes, if it’s in front of the car, it stops.

The approach could make all the difference in the world. I am working under the assumption that a car, if it’s having a hard time seeing something, “asks” a driver to verify a given update. Your example of overriding speed limits can be handled by reducing the number of inputs per user per unit of time. One car identifying one sign per hour/trip would cover a lot of signs in a few weeks. I seriously doubt that someone is going to sit around and write a script to emulate hundreds of cars slowly submitting bad data to a given section of street over weeks or months, up against hundreds if not thousands of hits to the contrary.
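
Just to make the mechanism concrete, here's a toy Python sketch of that per-car rate limit. The one-report-per-hour budget and every name in it are my own inventions, not anything a real mapping service has published.

```python
import time
from collections import defaultdict

MIN_INTERVAL = 3600.0  # hypothetical budget: one accepted report per car per hour

last_accepted = defaultdict(lambda: float("-inf"))

def accept_report(car_id, now=None):
    """Return True if this car still has rate budget for another report."""
    now = time.time() if now is None else now
    if now - last_accepted[car_id] < MIN_INTERVAL:
        return False  # over budget: drop the report on the floor
    last_accepted[car_id] = now
    return True

print(accept_report("car1", now=0.0))     # True: first report accepted
print(accept_report("car1", now=60.0))    # False: only a minute later
print(accept_report("car1", now=4000.0))  # True: over an hour has passed
```

Even a script faking hundreds of cars gets throttled to a trickle per fake identity, so the honest traffic still swamps it.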

I think you are right about this being where our miscommunication comes from: I am discussing how self-driving cars can be made to work with existing infrastructure, since upgrading all the infrastructure in the US, much less the world, isn’t going to happen anytime soon. Self-driving cars will need to be able to drive even in areas with no cell phone reception, for example. So while feeding the car constant updates on weather and road conditions would be great, it has to be able to function without them.

It would be better than human drivers, in my experience.

This is a helpful clarification. I guess I don’t understand why Google would consider data from self-driving cars it designed untrustworthy. Can’t it encrypt and secure the data sent from these cars automatically and judge it to be more reliable than user reports?

And, can’t Google also find contemporaneous data submitted from a passenger in one of those cars more trustworthy than data that is apparently submitted from 500 random ghost cars a hacker creates on a botnet? Perhaps the inputs could even be limited to cases where the car asks the user for clarification. If the car asks, “Is that a traffic light?” and the first passenger answers “yes,” the system has some greater confidence that there is a light there. When the next Google car passes, that passenger will be asked to confirm whether there is a light there. After three or four passengers in different cars confirm the light, Google can be pretty confident there is a light there.
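
Here's a sketch of that confirm-as-you-pass scheme, with a threshold I made up to match the “three or four passengers” above. None of this is Google’s actual design.

```python
CONFIRMATIONS_NEEDED = 4  # invented threshold: "three or four passengers"

class CandidateLight:
    """A possible traffic light, pending confirmation by passing riders."""

    def __init__(self, location):
        self.location = location
        self.confirmed_by = set()  # distinct passengers who answered "yes"

    def record_answer(self, passenger_id, saw_light):
        if saw_light:
            self.confirmed_by.add(passenger_id)

    @property
    def confident(self):
        # Only distinct passengers count, so a single rider (or a single
        # spoofed account) can't confirm a light on their own.
        return len(self.confirmed_by) >= CONFIRMATIONS_NEEDED

light = CandidateLight(location=(37.422, -122.084))
for rider in ["p1", "p2", "p3", "p4"]:
    light.record_answer(rider, saw_light=True)
print(light.confident)  # True after four distinct confirmations
```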

Again, I concede that securing the self-driving car maps database will be a challenge, but so is securing Gmail. I just believe that Google, and I hope other self-driving car makers, are up to that challenge.

I think the LA Times reported that blackhat Waze users sometimes try to report fake traffic accidents on their streets to keep Waze from recommending their road as a cut-through during times of congestion. Waze said this wouldn’t work, and I can see why. Waze has plenty of data from other users showing traffic moving smoothly through the streets notwithstanding the purported “accident.” The accident reports are basically ignored by the routing algorithm. When I have slowed down on the road, Waze has also prompted me at times with a question like “Are you stuck in traffic?” Waze would probably view those responses as more reliable than reports generated by a person standing by the side of the road with a phone pretending to see an accident.
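
Here's a rough Python sketch of why the fake-accident trick loses. The weights and threshold are invented, but the shape matches what Waze describes: passively collected speed traces from everyone on the street outvote a handful of manual taps.

```python
def street_is_blocked(speed_samples_mph, accident_reports,
                      free_flow_mph=25.0, report_weight=2.0):
    """Crude blend of passive speed traces and manual accident reports.

    Invented heuristic: each manual report counts as `report_weight`
    slow-speed samples. Dozens of phones moving at free-flow speed
    therefore outvote a few fake "accident" taps from the sidewalk.
    """
    slow = sum(1 for s in speed_samples_mph if s < free_flow_mph / 2)
    evidence_blocked = slow + report_weight * accident_reports
    evidence_clear = len(speed_samples_mph) - slow
    return evidence_blocked > evidence_clear

# 40 cars cruising near free-flow speed vs. 3 fake reports: street reads clear.
print(street_is_blocked([24.0] * 40, accident_reports=3))  # -> False
```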

I don’t believe human-driven cars will ever entirely disappear.
I also think that once true self-driving becomes viable, the advantages will be such that there will be incredible pressure for a speedy changeover.
And further to that… even the areas that are not suitable for autonomous cars will shrink and shrink.
I know that one current problem is the level of detail needed for mapping.
But as to speed of change specifically… my idea of fast would be 90% autonomous in 20 years.

I can’t see going to 90% self-driving in 20 years. You’re going to have to wait for a generation of drivers to die off to see that level of penetration. But when you’re a kid who gets driven to school alone in a self-driving car, and then you become a teenager who goes to the mall in a self-driving car, and then you become a young adult who goes to the bar in a self-driving car, you never have that “YOU MUST LEARN TO DRIVE TO BE A FULLY FUNCTIONING ADULT” moment. So once SD cars become common, you have a generation of kids who never learned to drive and who will never purchase or operate a human-piloted vehicle. That’s what will really lock in SD cars. But you’ll have to wait a generation for the young adults of 2016 to die off before people who drive their own cars are seen as eccentric. And getting a driver’s license might become much harder and more rigorous, and you really would have to demonstrate above-average skills before you’re allowed to pilot a car.

I’m pretty sure that the transition will be driven hard by insurance. You will start to see insurance rates increase for human-driven cars, along with clauses prohibiting manual driving; accidents other than little parking-lot bumps will not be covered without a “manual driver” package on your policy.

Like a lot of other things, people talk a lot of shit until they have to put their money where their mouth is. Having a $100-200 a month difference in insurance premiums will get a lot of people rethinking their point of view.

How many people do you see texting, reading a GPS or talking on a cell phone today? Some studies show that 35% of actual accidents are caused by distracted driving. The % who luckily avoid crashing while driving distracted is much higher – it just doesn’t get reported.

So a significant % of people are already trying to use a self-driving car; they just don’t have one.

Also, self-driving does not mean the entire trip must be self-driven. It also counts as self-driving if the driver temporarily engages the autopilot while making a phone call or looking up a restaurant on a GPS.

If reliable, affordable self-driving cars were available today, a high % of people would use that functionality to some degree.

Talk about your early adopters.

That reminds me of a program I worked on where someone decided to use a multiplexing standard that hadn’t actually been implemented yet. By the time the rest of the vehicle was nearing Critical Design Review, it became apparent that the proponents of the standard had abandoned efforts to make it work in hardware, which essentially crippled the program and would have required a near clean-sheet redesign. It was a blessing in disguise, because it was a dumb idea to begin with, answering a question nobody intelligent would have asked in the first place.

Back to the topic: someone will figure out a workable solution to the problem of auto-navigation of unmarked road hazards, and it may not be what was originally intended, but it will probably work better than the specification once it becomes widely deployed, just because clever people can’t help but tinker and innovate. We’re certainly not limited to the current state of the art.

Stranger

I would expect that the percentage who avoid crashing is much lower. If it were higher, then that would imply that texting while driving improved safety.

Yes, there are more people who drive distracted than who crash distracted. But there are a lot more people who drive undistracted than who crash undistracted.
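
The base-rate point is easy to see with made-up numbers. Keeping the 35% figure from above and assuming (purely for illustration) that 10% of driving hours are distracted:

```python
# Only the 35% figure comes from the post above; everything else is invented.
total_crashes = 1_000.0
distracted_share_of_crashes = 0.35   # "35% of actual accidents"
distracted_share_of_driving = 0.10   # assumed fraction of driving hours
total_driving_hours = 1_000_000.0

distracted_hours = total_driving_hours * distracted_share_of_driving
undistracted_hours = total_driving_hours - distracted_hours

rate_distracted = total_crashes * distracted_share_of_crashes / distracted_hours
rate_undistracted = total_crashes * (1 - distracted_share_of_crashes) / undistracted_hours

print(rate_distracted / rate_undistracted)  # ~4.85x riskier per distracted hour
```

So most distracted hours do end without a crash, yet each one is still several times as dangerous as an undistracted hour.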

$$$ and rules are the unknowns in all of this –

But speaking for myself only:

I was born in 1974.
My first car was a 1969 Morris 1100 (manual), later replaced by a 1974 Galant Coupe (manual). I enjoy driving, and relative to my peers I do a lot of it.

But all other factors being equal - there’s no convincing argument I could make for buying a car I drive myself vs an autonomous car. There’s simply no argument - the ability to read to the kids on the way to school instead of driving? Checking emails on the way to a meeting? Generally speaking, I may spend between 5 and 10 hours a week driving to meetings - what if that time could be spent working instead of driving?

Already since Monday my trip meter is showing 10 hours of driving, most of it city driving - what sort of an argument can I make that I need to do this vs a car that can drive itself? Safety and cost are the only ones that work - once this is sorted, there is nothing that can be said.

Already, as a means of comparison -
In my driving lifetime, cars have gone from “automatic transmissions are crap for performance, and manual is the only choice” to manual cars now being phased out. In top-end sports cars, sequential transmissions were only introduced 10-15 years ago, but now manual is offered mainly for the luddite enthusiast; most serious sports cars don’t offer a manual for anything other than “fun.”

Reminds me of the apocryphal story of the idiot in a rented RV, who set the cruise control and went off to make the coffee. Cruise Control as Auto-Pilot | Snopes.com

And you’re spot on with all of that. I too enjoy driving. I have an enthusiast’s car. But if I could automate the 45-minute (non-traffic) drive to & from work, I’d do it in a heartbeat.

The vast majority of folks consider driving a necessary chore like brushing their teeth or paying the bills. Those folks will jump at the chance to not have to do it.

If a person lives rural and therefore all their friends and neighbors live rural too, they’re not going to see the need. Nor are they going to have the opportunity for a long time to come.

But the vast majority of people who live in urban/suburban settings will adopt self-driving cars as fast as they become available as a mainstream option.

One of the biggest obstacles I see is any thought by the engineers or lawyers that people can be effective monitors, watching & waiting to take over if HAL goofs or encounters something he/she/it isn’t programmed to handle.

Nope, that ain’t gonna work. Either HAL is driving through any/all circumstances, or the people are. Yes, we can do things like self-driving only on freeways & human-only driving on surface streets. Maybe even HAL drives on all public roadways and humans drive in parking lots & such. Or vice versa. But situationally switching back and forth in real time when the going gets tough will not work.

Aviation has pretty conclusively proven the risks of using humans as monitors of highly, but not perfectly, reliable processes. Even with professionals adhering to strict procedure and subject to the peer pressure of always being monitored both by another person and by the boss remotely, we still have challenges when the automation goofs or decides the current situation is beyond its capability.

And in most of the flight regime where automation is extensively used, the pilots have many seconds to intervene before stuff gets critical. With cars, things often go from normal to critical in a couple seconds max, just because everything and everyone is so close together.

A corollary to this is that early self-driving cars need to be really, really good. Not just so-so. Which may prove to be the biggest obstacle to getting from here to there.

LSLGuy, you are exactly right. What we need are “Level 4” self-driving cars that require no input from the passengers from their starting point to their destination. At the very least, we need cars that know ahead of time when their passengers will need to intervene and give them plenty of warning to wake up, put away the phone, etc. so the passenger can help with the last foot of parking. These cars will also need a safety mode so that even if users fail at this simple bit, the cars will not cause traffic jams or accidents.

Unfortunately, what we are seeing now are greater autonomy features added to cars: lane-keeping assist, blind-spot monitoring, radar-assisted cruise control, automatic braking, and whatever Tesla has on the Model S. These features likely increase road safety, but in truth, cars with them are only incrementally more autonomous than old cars. Manufacturers still assume that the drivers are paying close attention. I think one of the biggest risks of self-driving cars is that their benefits will be oversold.

At some point, there will be a race to market fully autonomous cars. The best ones will work as they need to, but other manufacturers will market some combination of these safety features as “99% autonomous.” Journalists are already saying things like this about Teslas. Consumers facing a bewildering array of autonomy levels may misunderstand the technology. It’s already happening: one Volvo struck some journalists because the car being demonstrated lacked the critical pedestrian-detection system they were relying on, and the system wouldn’t have worked under the conditions the demonstrators created in any event. (Volvo Mows Down Journalists During Demo, Driver Didn't Buy 'Pedestrian Detection' Option - CBS San Francisco)

Headlines like “Autonomous car kills 20” will be easy to write when the real problem was “Ignorant consumer fails to read the manual and acts stupid.” The apocryphal RV story is just waiting to become reality.

I’m not so sure. My mom and I are both urban, and as a result, if one of us wants to visit the other, it’s a ten-minute drive. That’s not much of a chore: Honestly, it’s probably more work to get the dog kenneled and the house locked up and gather up whatever the reason for the visit is. But in some rural areas, a “neighbor” might be an hour or more away. And it’s a lot more useful to free up an hour of the car-user’s time than it is to free up ten minutes.

The engineering world has a pretty standard solution to such situations (granted, not an option in your speciality):

Brakes are on by default and are only released when “the path ahead of me within my stopping distance is clear” evaluates to yes; if in doubt, slow and/or stop. The entire world of amusement park rides works on this principle, even to the point that the braking systems are held “off” by power, so the vast majority of failures in the system result in the brakes engaging. The situations where superior driving skills will outperform the computer will be very small.
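
A minimal Python sketch of that fail-safe pattern (all names hypothetical, obviously nothing like real brake firmware): the default state is braking, and only a positive, healthy “path is clear” signal releases it.

```python
def brake_command(path_clear, sensors_ok):
    """Fail-safe braking: the default state is 'brakes applied'.

    Brakes release only when the system positively asserts both that the
    sensors are healthy and that the path within stopping distance is
    clear. Any doubt (sensor fault, missing data) falls through to
    braking, like amusement-park brakes held "off" by power.
    """
    if sensors_ok and path_clear:
        return "RELEASE"
    return "APPLY"  # default on any failure or uncertainty

assert brake_command(path_clear=True, sensors_ok=True) == "RELEASE"
assert brake_command(path_clear=True, sensors_ok=False) == "APPLY"
assert brake_command(path_clear=False, sensors_ok=True) == "APPLY"
```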

Am I the only one to see the (rather large) problem with Default=On with brakes?

Zipping down the road at 70 mph is NOT the time for a hiccup causing “Invoke Default”.

Unless every other pod for the 10 miles behind also slams on the brakes simultaneously.

Not a good idea.