Self-driving cars are still decades away

Here’s a simple problem in my own field that computers have a very hard time with:

You’ve got a 2-story apartment building at 123 Main Street, with apartments 1-10.

Next year, the building renumbers the ground-floor apartments 101-105, and the second-story apartments 201-205.

You and I know that they’re almost certainly the same ten apartments, renumbered.

Trying to teach a computer to consistently recognize this sort of renumbering, and to distinguish it from apartments 1-10 going out of existence while apartments 101-105 and 201-205 are brand-new residences, is a very hard problem. People who maintain sampling frames for large household surveys, and people who use those sampling frames, would really love to see it solved. (“Double chance of selection” is a dirty word in survey sampling.)
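To make the ambiguity concrete, here’s a toy sketch (my own illustrative heuristic, not anything a real frame-maintenance system uses) of why the two interpretations are so hard to tell apart from the unit lists alone:

```python
# A toy heuristic for "renumbered?" vs. "demolished + new construction?".
# The evidence available from the unit lists is purely circumstantial.

def looks_like_renumbering(old_units, new_units):
    """Guess whether new_units are old_units relabeled.

    Heuristic: every old unit disappeared, every new unit appeared,
    and the counts match. A real frame would also weigh floor layout,
    permit records, and postal data -- and would still get it wrong
    sometimes, because a 10-unit teardown rebuilt as 10 new units
    produces exactly the same signature.
    """
    if old_units & new_units:
        return False  # partial overlap: probably not a wholesale renumbering
    return len(old_units) == len(new_units)

old = set(range(1, 11))                             # apartments 1-10
new = set(range(101, 106)) | set(range(201, 206))   # 101-105 and 201-205

print(looks_like_renumbering(old, new))  # True -- but so would a teardown
```

The point is that the heuristic fires identically in both scenarios; disambiguating them needs outside information the frame usually doesn’t have.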

Driving is rife with recognition problems of this sort, where recognition of a situation is a cinch for humans but extremely challenging for computers. This is why I’m skeptical that AVs are just around the corner.

Unfortunately, if I had tried your first scenario, the number of post-3 PM meetings would mean I’d get in at 6 and leave at 6. For the second, 9 am - 10 am is still the heart of rush hour; 8 pm is okay. I had one job where for a long time I arrived at 8 am and left at 9 pm. No traffic on the drive home, but no life either. I quit.
I understand that traffic is a lot worse now than when I retired a bit over a year ago.

Here is an article from The Register on a proposed British law on insurance coverage for self-driving vehicles.
The comments section discusses the question of liability for accidents that occur between when a problem is found and when an update to the car’s software becomes available.

This does create a massive logistics and ethical issue once these things are common.

Assume that autonomous cars are common and, on average, have a crash death rate 1/10 that of the average driver. That’s a plausible short-term goal; if you believe Tesla’s claims, they are already at 1/3 the death rate and falling.

Say a software flaw is discovered. The cars still have the 1/10 death rate on average, but if one is driving in a dust storm in Kansas between the hours of 7 and 8 pm, all bets are off: the cars are a menace to their occupants.

  1. How do people get to work? In such a world, an increasing number of people wouldn’t even be licensed.

  2. A human driver who hasn’t driven for several years is probably going to be worse than letting the computer drive, even with the known software flaw.

It creates a huge problem as autonomy fleets sit stranded waiting for updates.

Uh… SAP has a “replacement material” field in its material masters. I’ve used it in several instances with beautiful results, but people not familiar with the mechanics seem to think I’m proposing the kind of magic which involves sacrificing newborn children, kittens and puppies to the Computer God over the corpse of Bambi’s mother.

This sounds to me like it’s that same problem. When someone wants to look for data on 1-10 before the “replacement date”, the data is labeled 1-10. When someone wants to look for data on 1-10 after the date, the program helpfully provides the new label (reports could have a format “newlabel (prev. oldlabel)” if so desired). Unlike the replacement materials you don’t even need to track separate stock of the two materials and make sure you finish using the old one while only acquiring stock of the new one.
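A minimal sketch of the lookup logic described above, assuming a hypothetical mapping table with an effective date (the names are illustrative, not SAP’s actual schema):

```python
from datetime import date

# Hypothetical "replacement label" table: old label -> (new label, effective date).
# Data and field names are made up for illustration.
REPLACEMENTS = {
    "Apt 1": ("Apt 101", date(2018, 1, 1)),
}

def display_label(label, as_of):
    """Resolve a label relative to the date of the data being queried."""
    replacement = REPLACEMENTS.get(label)
    if replacement and as_of >= replacement[1]:
        new_label, _effective = replacement
        return f"{new_label} (prev. {label})"  # the report format suggested above
    return label  # before the replacement date, the old label stands

print(display_label("Apt 1", date(2017, 6, 1)))  # Apt 1
print(display_label("Apt 1", date(2018, 6, 1)))  # Apt 101 (prev. Apt 1)
```

The key design choice is that the mapping is date-qualified, so historical queries see historical labels and nothing needs to be rewritten in the underlying data.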

Now, if you want the program to recognize it automatically… I’d like you to meet several million people who still can’t learn the “new names” of places that were renamed over 20 years ago, or keep track that yes, it’s the same street, not a parallel street. It’s not only a computer problem.

The flaws are probably going to be for very obscure situations, so very few of the cars are going to be affected (the non-Kansas ones). And even in the cases where there could be a problem, it will be unlikely to affect much - it is not like the dust storm will cause the car to explode.
We are already in this situation - I got a firmware upgrade on my Prius that stopped it shaking a bit when I went over certain types of rough roads. Over the air upgrades are going to be much faster, and are already in place for Teslas.

I think the concern was more flaws which most people are not aware of and therefore can’t guard against.

I live in a “non-Kansas” situation. I get 30 feet of snow a year at my house. I’m also a GIS programmer. I make road centerline data (among other data).

A fully autonomous plow truck, for instance, for my drive, is a LONG, LONG way off. As is a vehicle that will REALLY know what I need it to do. A completely localized GPS system that will read snow depth, measure how deep snow is plowed and where, and understand the quality of the snow and how compacted the snow storage piles are may someday work, but it’s a long way off. These vehicles are going to have to be able to be taken off autonomous control.

It’s not just take me to work. Or Chicago. It’s back up close to the fire pit, but not too close. Drive over that field over to that spot. But watch that small baseball sized rock hidden in the grass. Back up around the house to the deck. Sure, voice commands could work (little left, little left, gonna have to gun it to get through that snow, right, right, slow, left, right). But really, it’s much easier to use our hands and feet for all of that.

Yep, I know that I live in a rare situation. But taking control from drivers is going to make them worse. And when they find themselves in an odd situation where the car just says ‘no’, the human driver won’t have the first clue what to do.

I was talking about bugs in the programming, not inherent capabilities. I’m sure that there are tons of applications where self-driving makes no sense, is too complicated, and are niche enough to make the programming not worth it. You won’t find many in Antarctica either, no doubt. A true AI could be taught how to drive, but these things are not true AIs.

Updating this thread:

Having the confidence that the technology will be ready for that is one ambitious thing (ride-hailing is generally done in limited geographies and can be comprehensively mapped at least); having that confidence in the legal structures being in place by then? I dunno.

This thread is less than 2 months old; today I saw this: Insurance Companies Are Now Offering Discounts if You Let Your Tesla Drive Itself

The article clarifies:

But also notes:

That seems great, but “future” is a pretty loose term, eh? Still, I have no doubt that not only will these not be the only two companies offering discounts for using the auto, I think the discount will increase pretty steeply once there’s data with some meat on the bones, so to speak.

Note again that this is happening now, less than 2 months after the OP was made. The future is here and if it isn’t here, it’s heading our way fast. Very fast.

What does this mean? If vehicles using Autosteer have 40% fewer accidents than regular cars, they are still having a lot of accidents.
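For a sense of scale (the crash count below is made up for illustration; it is not a real statistic):

```python
# Back-of-envelope check of the point above: a 40% reduction still leaves
# a large absolute number of crashes. Baseline figure is hypothetical.

baseline_crashes = 6_000_000   # made-up annual crash count, all human-driven
reduction = 0.40               # the 40% improvement claimed for Autosteer

with_autosteer = baseline_crashes * (1 - reduction)
print(f"{with_autosteer:,.0f}")  # 3,600,000 -- far from the near-zero the public expects
```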

The public expects self driving cars to have ZERO accidents (or nearly zero.)
I vote for the OP–we are still decades away from level 5 self driving cars.

But level 4(?) vehicles will be here sooner than that.
Maybe taxis working within very limited areas; or long-distance trucks that drive a thousand miles on the highway by themselves, but then rely on a human driver for the final hour of city traffic and unloading cargo.

[QUOTE]
Note again that this is happening now, less than 2 months after the OP was made. The future is here and if it isn’t here, it’s heading our way fast. Very fast.
[/QUOTE]

Not until almost all cars are self-driving. I don’t think people expect a self-driving car to get out of the way of some clown who plows into it on a crowded exit ramp.
Accidents will get covered more heavily in the media because they are man bites dog stories, but that will taper off.

Then “the public” are idiots. If they don’t consider “1000 accidents a year” to be better than “2000 accidents a year,” they’re not being even “fake news” levels of irrational, they’re literally failing at understanding “greater than.”

The proper response to such people is to ignore them. If you don’t, you’ll never be able to sell a new product or service again.

Shouldn’t someone who has never had an accident while driving reasonably expect that they continue having zero accidents in a self-driving car?

No more so than they should reasonably expect to continue having zero accidents driving themselves. “I’ve been lucky I haven’t yet been in an accident caused by someone else’s actions that I couldn’t get away from” doesn’t change the laws of physics or probability.

Right now, for limited circumstances, these cars drive better than GOOD drivers, not just average drivers. Those circumstances will become less and less limited over the next few years, until they cover 99.however-many-nines percent of all driving situations. They will never have zero accidents, and neither will “people who have never had an accident driving reasonably” as a population, unless you stop counting them when they have their first accident.

Nearly all drivers, good or bad, will eventually be involved in an accident. The number of people who go 50+ years driving and are never involved in one is vanishingly small. That percentage will likely go up–dramatically–as we switch to self-driving vehicles.
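As a rough worked example (the per-year crash probability is a commonly cited ballpark, used here purely for illustration):

```python
# If the average driver has roughly a 1-in-18 chance of being in a crash
# in any given year (a figure insurers often cite; illustrative here),
# the chance of driving 50 years without a single crash is small.

p_crash_per_year = 1 / 18
p_crash_free_50yr = (1 - p_crash_per_year) ** 50
print(f"{p_crash_free_50yr:.1%}")  # roughly 6%
```

That ~6% squares with the claim above: going a whole driving lifetime without an accident is the exception, not the rule.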

The other factor to consider, though, is the economics of installing self-drive as a standard component rather than option.

Automatic transmission, once a high-priced option in the 1940s, soon got so cheap to install that it became nearly universal in the USA; now you have to order the manual option to get a car without it. Similarly, the cost of equipping a car with self-drive may become so low that every car will have the capability, even though many will be driven their entire lifetime without applying it.

Then we’ll be pretty much stuck with it, and over the decades, traffic configurations will gradually evolve to be self-drive capable, becoming as commonplace as high-speed exit lanes.

Coming back to this thread… Look, much of my skepticism arises from the fact that part of my job, without going into too much detail, involves assessing the impact of AVs on a handful of industries. Five years ago I felt like I was scrambling to keep up with the possibilities and our ability to figure out their impact, but over time I realized a couple of things.

First, OEMs and labs really overhype their testing. I remember watching a car cruise through Berlin and around the Brandenburg Gate years ago and thinking holy crap that is amazing, only to later learn how many test runs they’d done in advance to map the course, how they picked the time of day, etc. I’ve seen many similar since. There is a ton of capital flowing into this, and the hype is probably necessary.

Second, I’ve watched timeline and Level announcements slip over those same years, as auto executives either push back rollout dates, or modify what they said to require more human involvement. They also sometimes blur the line between L3 and L4. A cab cruising around Phoenix picking up passengers is truly impressive, but if there’s someone in the backseat waiting to hit a panic button, it’s still L3 until they put the vehicle on the road by itself.

A few years ago I got to test-ride in an L3 vehicle that was about to be rolled out (I bought one later!), with one of the engineers in the passenger seat. I was completely blown away at what it could do, but he also talked about the complexity of making incremental improvements, and then integrating all those improvements with each other. I didn’t think to ask him how far off he thought L5 was, but I wish I had.

And back to the original point, there are a LOT of OEMs doing testing in the snow right now, and I’m sure my skepticism will eventually put me behind the times, but none of them have announced success yet, so it can’t be that easy. Time will tell.

The timelines being pushed out do seem … ambitious. I’d completely dismiss them as hype except that the investments in production capacity are being made now. Ford’s planned 2021 release (and they have stated they are going straight to Level 4, for ride-hailing in predefined areas) comes with announced plans that Flat Rock’s ramp-up gets invested in now. GM saying 2019 means they have to already be building the capacity and arranging the supply chains. These companies ain’t Tesla - they have their ducks in a row, in redundant rows even, before they pull the switch. Multiple others are claiming Level 3 anyway by 2020 to 2021.

Maserschmidt re that Phoenix cab - what if the panic switch is exclusively there for the psychological comfort of the rider? Is that level 3 or 4?

Also, for Level 3 vehicles - could the range of conditions in which the car takes over completely expand as the vehicle self-maps the passenger’s common routes? It is easier for us, and I’d think for AIs too, to recognize a snow-covered sign as a stop sign if past trips have taught us to expect one to be there. Sort of the same process that gets me to work “on auto-pilot” as it is.

Let me give a cynical answer first [but I only give it about 20% weight] - boards of directors of traditional auto makers are in panic mode watching all these disruptors coming after their capital, so they’re going to exhibit progress, in some way, by necessity. BUT - I absolutely agree with you that their existing infrastructure gives them a huge potential leg up [that’s the other 80% weight], and I believe they are taking it seriously.

As I’m responding I’m trying to find a long article, I think it was in Crain’s, that featured an interview with an executive (Ford?) who I thought had a great balance of optimism and realism. If I can find it I’ll come back and post it. (but it wasn’t the article where the GM exec said Elon Musk is “full of crap” :slight_smile: )

That’s interesting, so like an AV version of Google street maps? That would make huge sense, if you could get all the auto makers to collaborate on it.

On the panic switch question, I’ve never thought about it. Presumably even L5 would have some kind of Emergency Stop button…?