Something that occurred to me whilst driving today.
How polite are the self-driving systems? Say the slow lane has a stopped bus ahead in it, or there's a lane restriction. Will the automatic systems let other cars into their lane in a polite manner, or are they self-centred bastards? How well do they behave when they need to merge in?
It also brings to mind the question of merging on roundabouts. Driving in merging traffic on roundabouts requires some pretty reasonable care in watching multiple traffic streams and predicting flows, especially on multilane roundabouts. Such things are pretty common outside of the US.
One of the skills one gets taught learning to drive is to monitor cars more than one ahead of you, sometimes looking through the rear window and out the front of the car ahead. (I drove a Mazda MX-5 (Miata) for 19 years; you get very good at keeping track of cars all around you with such tricks.)
I remain very skeptical that any system based on nothing more than a few cameras can manage all that. My current car, a two-year-old BMW with the full set of driver-assist tech, can get confused just doing the basic lane-safety stuff.
My experience with Tesla FSD is that it now properly takes its turn when merging. It was kind of a dick two years ago, but that's no longer the case. It handles roundabouts perfectly fine.
But that brings to my mind a related question: How will human drivers behave around obvious self-driving cars? Will the humans bully them, pamper them, or just treat them like ordinary citizens?
Of course different individual humans will do their different individual things. But if it becomes common knowledge that, say, a Waymo will always back down if you start to merge in front of it even when there isn't room, then real soon humans, being human, will pull that trick for advantage every time the opportunity presents itself. Or people will crowd the human-driven car ahead, knowing the Waymo won't bull into that space the way a pickup truck trapped behind a bus might.
Right now, for cars other than Waymos, there's no way to know which cars are self-driving at any given moment and which aren't. E.g. Teslas look the same whether FSD or the human is driving.
My own BMW is similar to yours in age and driver assist features. And yes, it’s far, far short of true self-driving. It doesn’t have the brains to anticipate somebody in the adjacent lane trying to dive into the too-small gap in front of me. But if that adjacent car starts into my lane, it sure backs down promptly and smoothly, making room for the intruder.
You can absolutely fuck with self driving cars if you know what you’re doing. I was in the far back of a parking lot one time showing my mom how the car backs into parking spots. There were three people on the opposite side of the lane hanging out and screwing around. They kept inadvertently making the car stop its movements until I explained to them what was going on. If someone wanted to be a dick, they could have prevented me from parking.
AIUI it's quite difficult to compare human and self-driving cars' safety records because of differences in the types of driving and severity of incidents, and there are many criteria for whether an accident "counts" toward the tally. That's how Tesla manages to claim that their taxis are an order of magnitude safer than human drivers when, based only on what we know, it appears to be the other way round.
That said, most analyses suggest that Waymo is indeed pretty safe, and given how many lunatic human drivers there are, it seems plausible that they are significantly safer than the average human driver. But the data needs to be taken with a pinch of salt, and there may not be one simple ratio for comparison.
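To make the counting problem concrete, here's a toy sketch in Python. All the numbers here (incident counts, mileage, human baselines) are invented purely for illustration; they are not real Waymo, Tesla, or regulator figures.

```python
# Hypothetical illustration of how counting criteria flip a comparison.
# Every number below is made up; none are real accident statistics.

def crashes_per_million_miles(crashes, miles):
    """Normalize a crash count to a per-million-miles rate."""
    return crashes / (miles / 1_000_000)

# Suppose an AV fleet logs 10M miles with 40 reported incidents total,
# but only 5 of those meet a stricter "injury crash" threshold.
av_loose = crashes_per_million_miles(40, 10_000_000)   # 4.0
av_strict = crashes_per_million_miles(5, 10_000_000)   # 0.5

# Suppose the human baseline is 2.0 per million miles under the loose
# criterion and 1.5 under the strict one (different driving mix, etc.).
human_loose, human_strict = 2.0, 1.5

print(av_loose / human_loose)    # 2.0  -> AV looks twice as bad
print(av_strict / human_strict)  # ~0.33 -> AV looks three times safer
```

Same fleet, same miles, opposite conclusions, just by changing what "counts" - which is why one headline ratio tells you very little.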
My FSD (with computer version 3, I’m stuck one version behind v13) will move over into the inside lane if it sees someone trying to merge from the side, and if someone comes up behind it (especially those a–h— tailgaters) it will move over into the right lane when it can, even if it was going faster than the right lane. It will also, if possible, move over to the inside lane for a vehicle parked on the shoulder on highways. But on a few occasions, it has failed to either speed up or slow down to accommodate a merging vehicle when it couldn’t move over. I presume it does the same in the stopped-bus scenario: it will keep going at regular speed, and it’s on the trapped vehicle to find a big enough opening. My biggest concern - and one where I often intervene early - is a crowded antique cloverleaf with a very short on/off lane over the bridge, and whether it will successfully merge into a chaotic group of merging vehicles.
Another category is where FSD gives up and flashes a big red steering wheel "take over", usually in response to something like the front camera being blinded by heavy rain or splashes. I suppose that counts as critical, but so far nothing serious has happened in actual traffic.
Agreed we don’t have the data for comparison. And until there is vastly stronger mandated consistent reporting for all accidents, human and AV, we’ll never get directly comparable figures.
AI is going to have different driving foibles from humans. It's going to be really good at always paying attention and never being drunk, drowsy, or distracted by passengers or by fiddling with audio/climate-control settings. This is likely to eventually (not saying we're there yet) lead to a marked decline in accidents caused by those sorts of things, and realistically that's a large majority of the preventable accidents that happen today.
However, AI is going to be weaker than humans in intelligently responding to novel situations. There are going to be instances where an AI driver is going to crash where a human could have easily avoided it, because AI is not actually intelligent in the way that humans are.
You might be right that the public will demand AI drivers be effectively perfect, but that's unreasonable on two counts: the expectation that it could be possible for AI to never make a dumb mistake a human would never make, and the position that AI drivers shouldn't be allowed to drive if they make such dumb mistakes, even if for every dumb injury-causing mistake the AI makes, it avoids a dozen injury-causing mistakes that humans would statistically have made driving the same route. Refusing to prevent 12 injuries because 1 unlucky sod is going to get smoked by a robot is just refusing to pull the handle in the trolley problem.
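The 12-to-1 arithmetic can be spelled out as a toy expected-harm calculation. The 12:1 ratio is the hypothetical from the paragraph above; everything else here is an assumption for illustration, not real data:

```python
# Toy trolley-problem arithmetic using the hypothetical 12:1 ratio above.
# These are illustrative assumptions, not real accident statistics.

# Over some fixed amount of driving (say, the same routes either way):
human_injuries = 12  # injuries human drivers would statistically have caused
av_injuries = 1      # a "dumb" AV mistake a human would never have made

# Banning the AV to avoid its 1 novel mistake forgoes the net saving.
net_prevented = human_injuries - av_injuries
print(f"Net injuries prevented by the AV: {net_prevented}")  # 11
```

The point is just that a policy judged only on the 1 novel AV injury, while ignoring the 12 avoided human-caused ones, gets the net sign of the decision wrong.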
I have wondered at the apparent difference between the Waymo sensor suite (which is distinctive and visible from a block away) and the Tesla sensor suite (which is not apparent to the casual observer). A description of the 6th generation Waymo sensor suite from their website:
With 13 cameras, 4 lidar, 6 radar, and an array of external audio receivers (EARs)
This is way more than a few cameras. How does this compare to the Tesla sensor suite?
Some commenters remark on the lack of lidar in Teslas, but I have yet to see a situation where lidar would have improved on what visual detection by cameras does. The worst case is that the two disagree. I would think an improvement would be two front cameras, and new Teslas will have a camera on the front bumper (which then requires a means to clean it - a dribble/spray).
Earlier Teslas had sonar sensors that they used for parallel parking. They removed them once they figured out how to do it with vision (kind of - there was a bit of a gap).
I’ve had the very occasional problem where FSD wouldn’t start when I was in a very dark area on a very dark night.
It may be irrational to expect AI to never make dumb mistakes that a human would never have made, but humans aren't rational. And there's a reason the trolley problem gets a lot of play.
Realistically, AI will always make mistakes humans wouldn’t have made, so they will have to be enormously better drivers on average before legislators or the public will trust them.
As opposed to a system based on only two cameras right next to each other? Why would it be harder for a computer than for a human?
And I wouldn’t necessarily conclude that humans would do better than computers in novel situations, either. Sure, maybe a human could do a better job of processing the information and come up with a better response… eventually. But in driving, it’s far more important to respond quickly than to get the perfect response, and computers are much quicker than humans. In fact, I would posit that, in a truly novel situation, the human would almost always fail. If a human succeeds in a situation, it’s because they anticipated it, and practiced for that situation, but if humans can anticipate it, so can the programmers of the computers, and it’s not novel any more.
I've had it start whining that I wasn't looking at the road. That was on a pitch-dark night in the countryside on a deserted interstate. I eventually figured out that my hand on the wheel was blocking the only light - from the screen - that made my eyes visible to the interior camera. In the same circumstances it complained about degraded FSD because one side camera supposedly wasn't working - it was, but things were so dark there was nothing discernible. Fortunately, with the headlights the front camera had no problem.
My Tesla - late 2018 - has a radar behind the front bumper. I gather that during a time when the units were in short supply after COVID, Musk decided that unit was redundant for the driving programs to use. They switched to vision-only, and that device is no longer used. I read somewhere that if you take your car in for servicing, they unplug it as an unneeded battery drain.
I suppose Waymo's lidar at least gives a different view - actual physical objects, rather than relying on image interpretation. I have to wonder what the range is.
While humans can drive using only visible-light vision, and therefore there’s proof that it can be done using only cameras, I don’t think there’s any reason to think that only cameras would be better than cameras plus other sensors (infrared, lidar, radar, sonar, etc.). Choosing to go only cameras looks like a clear mistake, to me.
That basically boils down to a simple question: when is the problem that the image interpretation is incorrect or flawed, versus when does the program fail to choose a correct action based on a correct assessment of the obstacles and motion around it?
Humans have much much better image processing than computers. So despite having only two cameras right next to each other, humans usually have a better idea of what’s out there.