Self driving cars are still decades away

More completely, you need to watch it continuously and begin actively driving whenever it’s about to make, or is already making, a mistake. That mistake might be slowing or not slowing, accelerating or not accelerating, turning or not turning, waiting or going, and so on.

Whether it’s realistic to expect the general public to be diligent enough and skillful enough to do this task reliably is the $64 billion question.

The cruise control in my other car doesn’t even try to stop when there’s an obstacle! The driver has to continuously pay attention and be ready to override it at a moment’s notice. How could GM have deliberately released technology that isn’t ready yet? The car is over 20 years old, and they still haven’t fixed that bug!

Cars are inherently dangerous, and driving is a task that requires constant attention and input from the driver. None of the driver-assist or (so-called) self-driving features in Teslas change that (see the thread title). The car and the documentation are very clear about that when the self-driving and driver-assist features are enabled.

Whether or not communications from Tesla’s executives have misled people is a legitimate discussion for a different thread. Whether driver-assist and self-driving features are more or less safe than unassisted human drivers is an empirical question that can be answered with studies and data. Evidence suggests that driver-assist features are safer, and Tesla claims that its self-driving features are too, but of course they do.

Covering the brake at all times would be a miscalculation of risk relative to the base rates of the events involved. In nearly 50,000 miles (probably 20,000 or so with some form of adaptive cruise control maintaining speed), I’ve never had the car fail to stop for an obstacle. However, in just a 10-mile highway trip I might have 1-2 phantom-braking incidents. Also, I’m paying attention, so when I see slower traffic ahead, I move to cover the brake.

I’ve also never had a phantom braking incident where the car came to a complete stop. That seems extremely unusual to me. In instances when I’ve known it was safe, I’ve let the car slow down just to find out what will happen. Typically that will be something like slowing from 70 to 60 (for no good reason), and then accelerating back to 70.
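
To put rough numbers on the base-rate point, here’s a quick sketch. The rates come from my own anecdotal experience above, and the 95% upper bound uses the standard “rule of three” for zero observed events; none of this is real fleet data:

```python
# Rough per-mile event rates from my own experience (illustrative, not data).
acc_miles = 20_000          # miles driven with adaptive cruise engaged
failures_to_stop = 0        # observed failure-to-stop events in those miles

highway_trip_miles = 10     # a typical highway trip
phantom_brakes = 1.5        # ~1-2 phantom-braking events per such trip

phantom_rate = phantom_brakes / highway_trip_miles   # events per mile
# With zero observed failures, a crude 95% upper bound on the true rate
# is the "rule of three": 3/n for n trials and 0 events.
failure_rate_upper = 3 / acc_miles

print(f"phantom braking: ~{phantom_rate:.2f} events/mile")
print(f"failure to stop: < {failure_rate_upper:.5f} events/mile (95% upper bound)")
print(f"ratio: phantom braking at least "
      f"{phantom_rate / failure_rate_upper:,.0f}x more frequent")
```

Even with that conservative bound, phantom braking is on the order of a thousand times more frequent than a failure to stop in my experience.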

The problem is that something like full self-driving can’t really be tested other than by putting it on the streets, since the problems stem from unforeseen edge cases. That’s the whole purpose of beta testing.

If regulatory agencies wanted to test the software, they would probably build some sort of test environment for the car, and then manufacturers would focus on passing the test, which isn’t the real world and likely wouldn’t solve much.

This is a thorny problem. You need beta tests for software like this (critical parts of which aren’t even inspectable code; they are a giant neural net), but beta tests on open roads put others at risk who aren’t part of the beta.

One part of the answer is education. Tesla shouldn’t be calling their system ‘full self driving’ until it can drive itself fully. Something like ‘advanced driver aid’ or ‘advanced lane and distance assistance’ would more accurately capture the current ability of the software.

America is heavily regulated. Lack of regulation isn’t the problem.

Evidence suggests that certain driver-assist features, like electronic stability control and automatic emergency braking, are safer. It might even be true that Tesla’s self-driving is safer, but Tesla and Musk are the only sources of that data, and they are not reliable arbiters of it. I recall reading an article, which I can’t find now, suggesting that some accidents were excluded from Tesla’s self-driving data because the car terminated self-driving before the accident, but only after the accident was unavoidable. Playing with data like that could skew whatever is coming from them.
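
To illustrate why that kind of exclusion matters, here’s a sketch with entirely made-up numbers, since we don’t have the raw data. It shows only the mechanism, not Tesla’s actual figures or methodology:

```python
# Hypothetical illustration of selection bias in crash reporting.
# None of these numbers are real; they only show the mechanism.
crashes_system_engaged_at_impact = 60
crashes_disengaged_just_before_impact = 40   # system bailed out <1s before impact
self_driving_miles = 1_000_000

reported_rate = crashes_system_engaged_at_impact / self_driving_miles
true_rate = (crashes_system_engaged_at_impact
             + crashes_disengaged_just_before_impact) / self_driving_miles

print(f"reported: {reported_rate * 1e6:.0f} crashes per million miles")
print(f"actual:   {true_rate * 1e6:.0f} crashes per million miles")
# Dropping the last-second disengagements understates the rate by 40% here.
```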

I disagree, and your limited experience doesn’t convince me otherwise. The severity of a failure-to-stop incident is likely to be far greater than the severity of a phantom-braking incident. Of course, if your car plows into a fire truck stopped in a lane of traffic, you might be too dead to notice.

The car can’t get into a failure-to-stop incident unless I haven’t been paying attention. There’s a distance at which I’d expect the car to start braking (call it A), another at which it becomes urgent but not critical (B), and finally a bare minimum (C). There is plenty of time even before B to hit the brakes if I feel the car is late (in practice, it hasn’t happened yet).
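
Those three distances aren’t mysterious, either; basic kinematics puts rough numbers on them. Here’s a sketch with generic textbook values for deceleration and reaction time, nothing from Tesla’s actual software:

```python
# Back-of-envelope stopping distances; all parameters are illustrative.
MPH_TO_MPS = 0.44704

def stopping_distance(speed_mph: float, decel_mps2: float,
                      reaction_s: float = 1.5) -> float:
    """Reaction distance plus braking distance: d = v*t + v^2 / (2a)."""
    v = speed_mph * MPH_TO_MPS
    return v * reaction_s + v**2 / (2 * decel_mps2)

v = 70  # mph
print(f"A (comfortable braking, ~2.5 m/s^2): {stopping_distance(v, 2.5):.0f} m")
print(f"B (firm braking,        ~5 m/s^2):   {stopping_distance(v, 5.0):.0f} m")
print(f"C (maximum braking,     ~8 m/s^2):   {stopping_distance(v, 8.0):.0f} m")
```

With those illustrative numbers, at 70 mph the comfortable point is around 240 m out, “urgent” starts around 145 m, and the hard floor is a bit over 100 m: plenty of separation between B and C for an attentive driver to intervene.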

There is, for example, Mcity, a 32-acre testing facility on the campus of the University of Michigan.

Closed courses and planned tests do not uncover the unknown unknowns, which is what beta tests are for. Where self-driving fails is in situations that no one thought of, and where there was scant training data. A woman on a bicycle covered in shopping bags. A semi stopped across a road in front of a hill. Strong winds blowing crap across the road. A million things no one has thought of at all. Strange reflections, whatever.

You can’t think of everything, so you put the product out with the public and let them use it, and record results. That’s why we do beta testing. The problem is that the beta tests are dangerous to others. I’m not sure there is a way around that. At least Waymo has a professional driver in the car whose job is to supervise the AI, but even they have had accidents.

Hahahaha come on now.

Seriously lads, a joke’s a joke but. Come. The fuck. On

“Yeah, you can set this self driving car that can’t self drive to automatically exceed the speed limit, of course you can. Wait, are you saying that might be a bad idea? Look, it only hit a few police cars and we’re pretty sure we’ve fixed that now, so just be a bit chill, huh?”

Here is a gift link straight to the NYT article. Well worth a read.

I don’t know if this has been posted before in this thread, I’m not up to reading 1000+ posts to see, so if it has, consider this as merely support for the concept.

I can’t be the only one thinking the auto industry is trying to bite off way too much at once with this self-driving concept. You get people used to the concept of automated controls before you force them to trust them. It seems simple to me.

GM or Ford needs to work with large grocery store chains to offer an “auto valet” feature. That is, the store would set up a special area in the parking lot for “self-parking” cars. You’d drive up to the front door of the store, get out, and tell the car to go park. The parking lot would be set up with special markers that would guide the self-parking cars to the special lot, where humans would not be permitted. Having to deal only with non-humans, the car could then find a suitable parking space and wait.

When the customer had finished shopping, they’d call their car on the phone, and it would drive up to the receiving area to collect them and their purchases. Queuing of pick-ups would be handled by AI, which would notify the customer when their car would be ready. Each car would back into the loading zone (like big rigs do), so loading would be quick and efficient in both space and time. With, say, 10 slots and 5 minutes for each customer to load their vehicle, a slot would open up every 30 seconds on average. Once the customer was back in the car, it would self-drive out of the restricted area, where the customer would take control of the vehicle to continue.
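
The slot arithmetic works out: in steady state, throughput is just slots divided by dwell time. A quick sanity check with the numbers above:

```python
# Steady-state loading-zone throughput: slots / dwell time.
slots = 10
load_minutes = 5.0           # time each customer gets to load up

cars_per_minute = slots / load_minutes           # 2 cars/min
seconds_per_slot_opening = 60 / cars_per_minute  # one slot frees every 30 s

print(f"throughput: {cars_per_minute:.0f} cars/min "
      f"({cars_per_minute * 60:.0f} cars/hour)")
print(f"a slot opens every {seconds_per_slot_opening:.0f} seconds on average")
```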

Of course, the benefits for the grocery chain are obvious: it becomes much easier for the customer to get into and out of the store (think about inclement weather). The automaker would have a great selling point for their new cars. Other stores and businesses would rush to join in (imagine going to a concert or sporting event and simply waiting in a coffee shop for your phone to tell you your car will be ready in five minutes, and leaving in six whether you are there or not). Since the traffic would be controlled by the AI, at least until you are out of the parking area, the traffic jams caused by everyone wanting to leave at the same time could be much reduced or even eliminated.

Then, after a decade or so of “auto-valet”, the public would have much more experience of how to behave around self-driving cars, a much larger percentage of cars on the road would have self-driving features, and limited-access highways available only to self-driving cars could be designed and introduced to the public. By that time, the issue of liability for injuries and property damage would have been worked out by the courts, and legislative standards for the technologies involved would be in place.

Baby steps.

Thank you!

There’s a lot of good stuff in that article, but one thing that jumped out at me is that this system should be in every car, self-driving or not:

After a minute, the car warned Key to keep his hands on the wheel and eyes on the road. “Tesla now is kind of a nanny about that,” he complained. If Autopilot was once dangerously permissive of inattentive drivers — allowing them to nod off behind the wheel, even — that flaw, like the stationary-object bug, had been fixed. “Between the steering wheel and the eye tracking, that’s just a solved problem,” Key said.

Although…

Still, he knew people who abused the system. One driver tied an ankle weight to the steering wheel to “kick back and do whatever” during long road trips.

An auto-valet feature would be even easier to implement in a company depot for commercial vehicles. It would save a lot of accidental damage and make the driver’s job far easier if the vehicle turned up at reception on time so they could just drive it away.

But, while this kind of innovation may impress the bean counters in charge of corporate vehicle fleets, the PR focus of much of the auto industry is the premium personal vehicle market, which sells personal agency to the individual. That is the noisy bagatelle that gets all the attention, with no shortage of opinions. It is also the part of the market where brands try to outdo each other. It is all about performance, style, and innovative features.

Tesla and other new auto companies have a strategy that targets the premium auto market for sound reasons: it is easier to make a profit, get free publicity and, importantly, raise investment. Much of the self-driving push is about raising capital, because it suggests big new markets, like self-driving taxis, might open up. But trying to handle any possible driving scenario is absurdly ambitious. Too many variables.

Getting delivery vans to park neatly in a depot and recharge is the kind of innovation the big logistics companies are looking for. It is a very different set of customers. If it works and saves money, there will be companies working on it.

lol

Mercedes’ Drive Pilot system can, “on suitable freeway sections and where there is high traffic density,” according to the company, take over the bumper-to-bumper crawling duties up to 40 MPH without the driver needing to keep their hands on the wheel.

This is in Nevada: surely few people drive over 40 mph there?

It’s basically rush-hour assistance.

I mean, I already pretty much do that with my wife’s Hyundai Ioniq. I still have to pay some attention to people cutting in, but for the stretch from my kid’s school to my off-ramp (~25 minutes), I basically let the car drive itself. And it’s not some fancy-schmancy car – just normal lane centering and adaptive cruise control. Been doing it for over a year now, and it’s made rush hour that much more relaxing. I do have to nudge the steering wheel with my thumb every once in a while so it doesn’t scream at me.

Apparently this requires slightly less attention from the driver than other cars do. Honestly, though, it feels like splitting hairs, and like someone got some bullshit past the regulators.

It’s been 20 years since I lived in Las Vegas. But I did drive there for a couple days 2 years ago. “Rush hour traffic” seems to be pretty much absent on the freeways there. It gets dense so folks are doing 50, not 80, but it doesn’t get slow-and-go or stop-and-go. At least not much and not for any appreciable distance.

The main surface streets, and especially the Strip, are another matter. That can be bumper-to-bumper much of the day & night.

As a very casual observer of AI, and someone who would really like a fully self-driving car in 10 years or so as I start to get old, I find it striking how big two other recent AI leaps have seemed (Stable Diffusion and the like for image generation, and ChatGPT). I doubt they’re really similar to self-driving in any way, but they do highlight how lightning fast progress can sometimes seem. Knock on wood that it happens soon for self-driving.